In this first paper we outline the possible historic-epistemological role that Ulisse Dini's work on implicit function theory may have played in formulating the structure of differentiable manifolds, via the foundational work of Hassler Whitney. A detailed historiographical survey of this work of Dini's is carried out. Further methodological considerations are then made regarding the history of mathematics.
Errett Bishop's work in constructive mathematics is overwhelmingly regarded as a turning point for mathematics based on intuitionistic logic. It brought new life to this form of mathematics and prompted the development of new areas of research that witness today's depth and breadth of constructive mathematics. Surprisingly, notwithstanding the extensive mathematical progress since the publication in 1967 of Errett Bishop's Foundations of Constructive Analysis, there have been no corresponding advances in the philosophy of Bishop-style constructive mathematics. The aim of this paper is to foster the philosophical debate about this form of mathematics. I begin by considering key elements of Bishop's philosophical remarks, especially focusing on his assessment of Brouwer. I then compare these remarks with ``traditional'' philosophical arguments for intuitionistic logic and argue that the latter are in tension with Bishop's views. ``Traditional'' arguments for intuitionistic logic turn out to be in conflict also with significant recent developments in constructive mathematics. This raises pressing questions for the philosopher of mathematics, especially with regard to the possibility of offering alternative philosophical arguments for constructive mathematics. I conclude with the suggestion to look anew at Bishop's own remarks for inspiration.
At its core this book is concerned with logic and computation with respect to the mathematical characterization of sentient biophysical structure and its behavior.

Three related theories are presented: The first of these provides an explanation of how sentient individuals come to be in the world. The second describes how these individuals operate. And the third proposes a method for reasoning about the behavior of individuals in groups.

These theories are based upon a new explanation of experience in nature, the construction of senses, and motile behavior. This new approach is developed from first principles to enable a rigorous and systematic explanation of the variety of associated intelligent behaviors.

Alongside this development is a further account that focuses upon the nature of our work. It discusses the existential aspects of scientific inquiry, its epistemology and logic. It seeks to clarify the nature of the mathematical characterization and computation of natural behaviors, dealing with questions in the foundations of logic. It explores methodological issues related to reduction and the refinement of ideas from intuition to formal logical structure.

In support of this inquiry we work toward the development of a calculus for biophysical construction and its dynamics. If successful, this mechanics mathematically characterizes sensory and motile behavior.

Upon this foundation we propose a model of apprehension and explore how its products are processed by the organism. Finally, we develop a probabilistic theory that enables us to reason about inaccessible factors in group behavior.

The mechanics we propose suggests the design and physical realization of a new model of computation, one in which structure and the concurrency of action are a first-order consideration.

We identify opportunities for experimental verification of the theory, and we suggest a proof of our results in practice by the identification of this mechanism, allowing the construction of machines that experience.
We present some aspects of the genesis of a geometric construction, which can be carried out with compass and straightedge, from the original idea to the published version (Fernández González 2016). The Midpoint Path Construction makes it possible to multiply the length of a line segment by a rational number between 0 and 1 by constructing only midpoints and a straight line. In the form of an interview, we explore the context and narrative behind the discovery, with first-hand insights by its author. Finally, we discuss some general aspects of this case study in the context of philosophy of mathematical practice.
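The abstract leaves the construction itself to the cited paper. As a loose numerical illustration only (not the author's actual compass-and-straightedge procedure), the sketch below shows how iterated midpoints alone already realize every dyadic multiple k/2^n of a unit segment; the helper name `dyadic_by_midpoints` is hypothetical:

```python
from fractions import Fraction

def dyadic_by_midpoints(bits):
    """Reach k/2^n of the unit segment [0, 1] using only midpoints.

    `bits` is the binary expansion of the target dyadic rational,
    most significant bit first; each step replaces the current point
    by its midpoint with either endpoint 0 or endpoint 1.
    """
    x = Fraction(0)
    for b in reversed(bits):      # process least significant bit first
        x = (x + b) / 2           # midpoint with endpoint 0 (b=0) or 1 (b=1)
    return x

# 5/8 = 0.101 in binary: three midpoint steps suffice
print(dyadic_by_midpoints([1, 0, 1]))  # -> 5/8
```

Reaching a general rational p/q, as the Midpoint Path Construction does, additionally requires the straight line mentioned in the abstract; the point of the sketch is only that midpoints alone exhaust the dyadic case.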
Financial modelling is an essential tool for studying the possibility of financial transactions. This paper argues that financial models are conventional tools widely used in formulating and establishing possibility claims about a prospective investment transaction, from a set of governing possibility assumptions. What is distinctive about financial models is that they articulate how a transaction possibly could occur in a non-actual investment scenario given a limited base of possibility conditions assumed in the model. For this reason, it is argued that the epistemic contribution of financial models is that of enabling the model users to envision exactly how a prospective investment could be achieved in various ways through a detailed understanding of the available transaction mechanisms. Thus, financial models provide information about the possibility of an investment scenario by showing how a specific transaction mechanism could result from a small set of initial possibility conditions assumed in the model.
Arguments for the effectiveness, and even the indispensability, of mathematics in scientific explanation rely on the claim that mathematics is an effective or even a necessary component in successful scientific predictions and explanations. Well-known accounts of successful mathematical explanation in physical science appeal to scientists’ ability to solve equations directly in key domains. But there are spectacular physical theories, including general relativity and fluid dynamics, in which the equations of the theory cannot be solved directly in target domains, and yet scientists and mathematicians do effective work there (McLarty 2023, Elder 2023). Building on extant accounts of structural scientific explanation (Bokulich 2011, Leng 2021), I argue that philosophical accounts of the role of equations in scientific explanation need not rely on scientists’ ability to solve equations independently of their understanding of the empirical or experimental context. For instance, the process of formulating solutions to equations can involve significant appeal to information about experimental contexts (Curiel 2010) or about physically similar systems (Sterrett 2023). Working from a close analysis of work in fluid mechanics by Martin Bazant and Keith Moffatt (2005), I propose an account of heuristic structural explanation in mathematics (Einstein 1921, Pincock 2021), which explains how physical explanations can be constructed even in domains where basic equations cannot be solved directly.
In 'Forever Finite: The Case Against Infinity' (Rond Books, 2023), the author argues that, despite its cultural popularity, infinity is not a logical concept and consequently cannot be a property of anything that exists in the real world. This article summarizes the main points in 'Forever Finite', including its overview of what debunking infinity entails for conceptual thought in philosophy, mathematics, science, cosmology, and theology.
In this article I present a disagreement between classical and constructive approaches to predicativity regarding the predicative status of so-called generalised inductive definitions. I begin by offering some motivation for an enquiry into the predicative foundations of constructive mathematics, by looking at contemporary work at the intersection of mathematics and computer science. I then review the background notions and spell out the above-mentioned disagreement between classical and constructive approaches to predicativity. Finally, I look at possible ways of defending the constructive predicativity of inductive definitions.
Models are indispensable tools of scientific inquiry, and one of their main uses is to improve our understanding of the phenomena they represent. How do models accomplish this? And what does this tell us about the nature of understanding? While much recent work has aimed at answering these questions, philosophers' focus has been squarely on models in empirical science. I aim to show that pure mathematics also deserves a seat at the table. I begin by presenting two cases: Cramér’s random model of the prime numbers and the function field model of the integers. These cases show that mathematicians, like empirical scientists, rely on unrealistic models to gain understanding of complex phenomena. They also have important implications for some much-discussed theses about scientific understanding. First, modeling practices in mathematics confirm that one can gain understanding without obtaining an explanation. Second, these cases undermine the popular thesis that unrealistic models confer understanding by imparting counterfactual knowledge.
Volume II deals with philosophy of mathematics and general philosophy of science. In discussing theoretical entities, the notion of antirealism formulated in Volume I is further elaborated: Contrary to what is usually attributed to antirealism or idealism, the author does not claim that theoretical entities do not really exist, but rather that their existence is not independent of the possibility to know about them.
The paper discusses Peano's defense and application of permanence as a principle of practice, and Hahn's further point that, even if it were a principle of logic, permanence would not eliminate all logical ambiguity. Dedicated to the memory of Mic Detlefsen.
This paper argues that Noether's axiomatic method in algebra cannot be assimilated to Weyl's late view on axiomatics, for his acquiescence to a phenomenological epistemology of correctness led Weyl to resist Noether's principle of detachment.
A principle according to which any scientific theory can be mathematized is investigated. The theory is presupposed to be a consistent text, which can be exhaustively and constructively represented by a certain mathematical structure. As used here, the term “theory” includes all hypotheses, whether as yet unconfirmed or already rejected. The investigation of the sketch of a possible proof of the principle demonstrates that it should rather be accepted as a metamathematical axiom about the relation of mathematics and reality. Its investigation needs philosophical means. Husserl’s phenomenology is what is used, and the conception of “bracketing reality” is then modelled to generalize Peano arithmetic in its relation to set theory in the foundations of mathematics. The obtained model is equivalent to the generalization of Peano arithmetic obtained by replacing the axiom of induction with that of transfinite induction. A comparison with Mach’s doctrine is used to reveal the fundamental philosophical reductionism of Husserl’s phenomenology, leading to a kind of Pythagoreanism in the final analysis. Accepting or rejecting the principle, two kinds of mathematics appear, differing from each other in their relation to reality. Accepting the principle, mathematics has to include reality within itself in a kind of Pythagoreanism. These two kinds are called in the paper, correspondingly, Hilbert mathematics and Gödel mathematics. The sketch of the proof of the principle demonstrates that the generalization of Peano arithmetic as above can be interpreted as a model of Hilbert mathematics within Gödel mathematics, thereby showing that the former is no less consistent than the latter, and that the principle is an independent axiom. An information interpretation of Hilbert mathematics is involved: it is a kind of ontology of information. Thus the problem of which of the two mathematics is more relevant to our being is discussed. An information interpretation of the Schrödinger equation is invoked to illustrate this problem.
In this article, I tackle a heretofore unnoticed difficulty with the application of Pyrrhonian scepticism to science. Sceptics can suspend belief regarding a dogmatic proposition only by setting up opposing arguments for and against that proposition. Since Sextus provides arguments exclusively against particular geometrical definitions in Adversus Mathematicos III, commentators have argued that Sextus’ method is not scepticism, but negative dogmatism. However, commentators have overlooked the fact that arguments in favour of particular geometrical definitions were absent in ancient geometry, and hence unavailable to Sextus. While this might explain why they are also absent from Sextus’ text, I survey and evaluate various strategies to supply arguments in support of particular geometrical definitions.
Mathematicians distinguish between proofs that explain their results and those that merely prove. This paper explores the nature of explanatory proofs, their role in mathematical practice, and some of the reasons why philosophers should care about them. Among the questions addressed are the following: what kinds of proofs are generally explanatory (or not)? What makes a proof explanatory? Do all mathematical explanations involve proof in an essential way? Are there really such things as explanatory proofs, and if so, how do they relate to the sorts of explanation encountered in philosophy of science and metaphysics?
This article provides a philosophical discourse approach to deconstructing Ceteris Paribus (CP) as applied in contemporary Africa. The concept of CP, which affirms the notion that ‘all things are equal’, does not always hold true in the real world. The author goes beyond the normal interpretation of the word ‘shock’, which makes it impossible for the CP concept to hold true in reality. The paper unravels critical discourses spanning the element of corruption as a key factor in the current state of Africa’s economic malaise. It is therefore incumbent on African scholars and professionals to continue their strides in promoting a critical hermeneutic space, pursued through empirical endeavours or otherwise, in support of developing a philosophy based on pragmatism for the enhancement of economic methodology, focused on the continent’s pathway of (sustained) economic development.
How is it that when scientists need some piece of mathematics through which to frame their theory, it is there to hand? What has been called 'the unreasonable effectiveness of mathematics' sets a challenge for philosophers. Some have responded to that challenge by arguing that mathematics is essentially anthropocentric in character, whereas others have pointed to the range of structures that mathematics offers. Otavio Bueno and Steven French offer a middle way, which focuses on the moves that have to be made in both the mathematics and the relevant physics in order to bring the two into appropriate relation. This relation can be captured via the inferential conception of the applicability of mathematics, which is formulated in terms of immersion, inference, and interpretation. In particular, the roles of idealisations and of surplus structure in science and mathematics respectively are brought to the fore and captured via an approach to models and theories that emphasizes the partiality of the available information: the partial structures approach. The discussion as a whole is grounded in a number of case studies drawn from the history of quantum physics, and extended to contest recent claims that the explanatory role of certain mathematical structures in scientific practice supports a realist attitude towards them. The overall conclusion is that the effectiveness of mathematics does not seem unreasonable at all once close attention is paid to how it is actually applied in practice.
The idea of rejection originated with Aristotle. The notion of rejection was introduced into formal logic by Łukasiewicz, who applied it to the complete syntactic characterization of deductive systems using an axiomatic method of rejection of propositions [22, 23]. The paper gives not only the genesis, but also the development and generalization, of the notion of rejection. It also emphasizes the methodological approach to the biaspectual axiomatic method of characterizing deductive systems as acceptance (asserted) systems and rejection (refutation) systems, introduced by Łukasiewicz and developed by his student Słupecki, the pioneers of the method, which becomes relevant in modern approaches to logic.
I defend a new position in philosophy of mathematics that I call mathematical inferentialism. It holds that a mathematical sentence can perform the function of facilitating deductive inferences from some concrete sentences to other concrete sentences, that a mathematical sentence is true if and only if all of its concrete consequences are true, that the abstract world does not exist, and that we acquire mathematical knowledge by confirming concrete sentences. Mathematical inferentialism has several advantages over mathematical realism and fictionalism.
In this survey, a recent computational methodology paying special attention to the separation of mathematical objects from the numeral systems involved in their representation is described. It has been introduced with the intention of allowing one to work with infinities and infinitesimals numerically in a unique computational framework in all situations requiring these notions. The methodology does not contradict Cantor’s and non-standard analysis views and is based on Euclid’s Common Notion no. 5, “The whole is greater than the part”, applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses a computational device called the Infinity Computer (patented in the USA and EU) working numerically (recall that traditional theories work with infinities and infinitesimals only symbolically) with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix. It is argued that the numeral systems involved in computations limit our capabilities to compute and also lead to ambiguities in theoretical assertions. The introduced methodology gives the possibility to use the same numeral system for measuring infinite sets and for working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. (recall that traditionally different numerals, such as the lemniscate and aleph-zero, are used in different situations related to infinity). Numerous numerical examples and theoretical illustrations are given. The accuracy of the achieved results is continuously compared with that obtained by traditional tools used to work with infinities and infinitesimals. In particular, it is shown that the new approach allows one to observe the mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with a higher accuracy than is done by traditional tools. It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of the traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems.
In this paper a parallel is drawn between the problem of epistemic access to abstract objects in mathematics and the problem of epistemic access to idealized systems in the physical sciences. On this basis it is argued that some recent and more traditional approaches to solving these problems are problematic.
The objective of this article is twofold. First, a methodological issue is addressed. It is pointed out that even if philosophers of mathematics have recently become more and more concerned with the practice of mathematics, there is still a need for a sharp definition of what the targets of a philosophy of mathematical practice should be. Three possible objects of inquiry are put forward: (1) the collective dimension of the practice of mathematics; (2) the cognitive capacities required of the practitioners; and (3) the specific forms of representation and notation shared and selected by the practitioners. Moreover, it is claimed that a broadening of the notion of ‘permissible action’, as introduced by Larvor (2012) with respect to mathematical arguments, allows for a consideration of all three of these elements simultaneously. Second, a case from topology, the proof of Alexander’s theorem, is presented to illustrate a concrete analysis of a mathematical practice and to exemplify the proposed method. It is discussed how the attention to the three elements of the practice identified above leads to the emergence of philosophically relevant features in the practice of topology: the need for a revision of the criteria of validity, the interest in tracking the operations that are performed on the notation, and the constant and fruitful back-and-forth from one representation to another in dealing with mathematical content. Finally, some suggestions for further research are given in the conclusions.
Many mathematicians have cited depth as an important value in their research. However, there is no single widely accepted account of mathematical depth. This article is an attempt to bridge this gap. The strategy is to begin with a discussion of Szemerédi's theorem, which says that each subset of the natural numbers that is sufficiently dense contains an arithmetical progression of arbitrary length. This theorem has been judged deep by many mathematicians, and so makes for a good case on which to focus in analyzing mathematical depth. After introducing the theorem, four accounts of mathematical depth will be considered.
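For readers unfamiliar with the property the theorem concerns, a minimal brute-force check for arithmetic progressions in a finite set can be sketched as below. This is of course nothing like the deep techniques used to prove Szemerédi's theorem; the helper name `has_ap` is hypothetical:

```python
def has_ap(s, k):
    """Return True if the finite set s contains a k-term arithmetic
    progression a, a+d, a+2d, ..., a+(k-1)d with common difference d >= 1."""
    s = set(s)
    for a in s:
        for d in range(1, max(s) - a + 1):
            if all(a + i * d in s for i in range(k)):
                return True
    return False

# {1,...,9} contains many 4-term progressions, e.g. 1,2,3,4 ...
print(has_ap(range(1, 10), 4))                 # -> True
# ... while the powers of two contain no 3-term progression at all
print(has_ap([2 ** i for i in range(10)], 3))  # -> False
```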
This note presents an analysis of Symbolic Knowledge from Leibniz to Husserl, a collection of works by some members of The Southern Cone Group for the Philosophy of Formal Sciences. The volume delineates an outlook on the philosophical treatments, by Leibniz, Kant, Frege, and the Booleans, as well as by Husserl, of some questions related to the conceptual singularities of symbolic knowledge, whose standard we find in the arts of algebra and arithmetic. The book’s unity of themes and (at least in part) style is examined with the aim of showing the articulation of its parts.
Imre Lakatos' views on the philosophy of mathematics are important and they have often been underappreciated. The most obvious lacuna in this respect is the lack of detailed discussion and analysis of his 1976a paper and its implications for the methodology of mathematics, particularly its implications with respect to argumentation and the matter of how truths are established in mathematics. The most important themes that run through his work on the philosophy of mathematics and which culminate in the 1976a paper are (1) the (quasi-)empirical character of mathematics and (2) the rejection of axiomatic deductivism as the basis of mathematical knowledge. In this paper Lakatos' later views on the quasi-empirical nature of mathematical theories and methodology are examined and specific attention is paid to what this view implies about the nature of mathematical argumentation and its relation to the empirical sciences.
Recent experimental evidence from developmental psychology and cognitive neuroscience indicates that humans are equipped with unlearned elementary mathematical skills. However, formal mathematics has properties that cannot be reduced to these elementary cognitive capacities. The question then arises how human beings cognitively deal with more advanced mathematical ideas. This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment. In this view, mathematical symbols are not only used to express mathematical concepts—they are constitutive of the mathematical concepts themselves. Mathematical symbols are epistemic actions, because they enable us to represent concepts that are literally unthinkable with our bare brains. Using case-studies from the history of mathematics and from educational psychology, we argue for an intimate relationship between mathematical symbols and mathematical cognition.
In recent decades, experimental mathematics has emerged as a new branch of mathematics. This new branch is defined less by its subject matter, and more by its use of computer-assisted reasoning. Experimental mathematics uses a variety of computer-assisted approaches to verify or prove mathematical hypotheses. For example, there is “number crunching” such as searching for very large Mersenne primes, and showing that the Goldbach conjecture holds for all even numbers less than 2 × 10^18. There are “verifications” of hypotheses which, while not definitive proofs, provide strong support for those hypotheses, and there are proofs involving an enormous amount of computer hours, which cannot be surveyed by any one mathematician in a lifetime. There have been several attempts to argue that one or another aspect of experimental mathematics shows that mathematics now accepts empirical or inductive methods, and hence shows mathematical apriorism to be false. Assessing this argument is complicated by the fact that there is no agreed definition of what precisely experimental mathematics is. However, I argue that on any plausible account of ’experiment’ these arguments do not succeed.
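The "number crunching" mentioned in the abstract can be illustrated in miniature. The sketch below (with hypothetical helper names; the real verifications up to 2 × 10^18 use far more sophisticated sieves) verifies the Goldbach conjecture for small even numbers by exhibiting a witness pair of primes:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def goldbach_holds(n):
    """Return a pair of primes (p, q) with p + q == n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Verify the conjecture for all even n up to 1000 by finding a witness each time
assert all(goldbach_holds(n) for n in range(4, 1001, 2))
print(goldbach_holds(100))  # -> (3, 97)
```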
There exists a huge number of numerical methods that iteratively construct approximations to the solution y(x) of an ordinary differential equation (ODE) y′(x) = f(x,y) starting from an initial value y_0 = y(x_0) and using a finite approximation step h that influences the accuracy of the obtained approximation. In this paper, a new framework for solving ODEs is presented for a new kind of computer, the Infinity Computer (it has been patented and its working prototype exists). The new computer is able to work numerically with finite, infinite, and infinitesimal numbers, thus giving the possibility of using different infinitesimals numerically and, in particular, of taking advantage of infinitesimal values of h. To show the potential of the new framework, a number of results are established. It is proved that the Infinity Computer is able to calculate derivatives of the solution y(x) and to reconstruct its Taylor expansion to a desired order numerically, without finding the respective derivatives analytically (or symbolically) by successive derivation of the ODE, as is usually done when the Taylor method is applied. Methods using approximations of derivatives obtained thanks to infinitesimals are discussed, and a technique for automatic control of rounding errors is introduced. Numerical examples are given.
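The Infinity Computer's arithmetic is patented and not reproduced here. As a rough analogy only, forward-mode automatic differentiation with dual numbers also extracts derivatives numerically, by computing with an infinitesimal-like unit ε satisfying ε² = 0; the class below is a minimal sketch of that standard technique, not the paper's framework:

```python
class Dual:
    """Dual numbers a + b*eps with eps**2 == 0: a toy stand-in for
    computing with an infinitesimal step (NOT the Infinity Computer's
    arithmetic, which handles full infinite/infinitesimal numerals)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value and eps-coefficient
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + eps and read f'(x) off the eps-coefficient."""
    return f(Dual(x, 1.0)).b

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # -> 7.0
```

The design point mirrored here is the one the abstract stresses: the derivative comes out of a purely numerical evaluation, with no symbolic differentiation of f.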
Kant's arguments for the synthetic a priori status of geometry are generally taken to have been refuted by the development of non-Euclidean geometries. Recently, however, some philosophers have argued that, on the contrary, the development of non-Euclidean geometry has confirmed Kant's views, for since a demonstration of the consistency of non-Euclidean geometry depends on a demonstration of its equi-consistency with Euclidean geometry, one need only show that the axioms of Euclidean geometry have 'intuitive content' in order to show that both Euclidean and non-Euclidean geometry are bodies of synthetic a priori truths. Michael Friedman has argued that this defence presumes a polyadic conception of logic that was foreign to Kant. According to Friedman, Kant held that geometrical reasoning itself relies essentially on intuition, and that this precludes the very possibility of non-Euclidean geometry. While Friedman's characterization of Kant's views on geometrical reasoning is correct, I argue that Friedman's conclusion that non-Euclidean geometries are logically impossible for Kant is not. I argue that Kant is best understood as a proto-constructivist and that modern constructive axiomatizations (unlike Hilbert-style axiomatizations) of both Euclidean and non-Euclidean geometry capture Kant's views on the essentially constructive nature of geometrical reasoning well.
The main tool of the arithmetization and logicization of analysis in the history of nineteenth-century mathematics was an informal logic of quantifiers in the guise of the “epsilon–delta” technique. Mathematicians slowly worked out the problems encountered in using it, but logicians from Frege on did not understand it, let alone formalize it, and instead used an unnecessarily poor logic of quantifiers, viz. the traditional first-order logic. This logic does not, for example, allow the definition and study of mathematicians’ uniformity concepts important in analysis. Mathematicians’ stronger logic was rediscovered around 1990 in the form of independence-friendly logic, which is hence not a new logic, nor a further development of ordinary first-order logic, but a richer version of it.
Some aspects of Federigo Enriques' thought on the philosophy of mathematics are taken as central reference points for a critical historic-epistemological comparison between it and some of the main aspects of the philosophical thought of contemporaries of his, such as Gaston Bachelard and Hermann Weyl. From what will be set out, it will also be possible to discern possible educational implications of the historic-epistemological approach.
Past and present societies world-wide have employed well over 100 distinct notational systems for representing natural numbers, some of which continue to play a crucial role in intellectual and cultural development today. The diversity of these notations has prompted the need for classificatory schemes, or typologies, to provide a systematic starting point for their discussion and appraisal. The present paper provides a general framework for assessing the efficacy of these typologies relative to certain desiderata, and it uses this framework to discuss the two influential typologies of Zhang & Norman and Chrisomalis. Following this, a new typology is presented that takes as its starting point the principles by which numerical notations represent multipliers (the principles of cumulation and cipherization), and bases (those of integration, parsing, and positionality). Many different examples show that this new typology provides a more refined classification of numerical notations than the ones put forward previously. In addition, the framework provided here can be used to assess typologies not only of numerical notations, but also of many other domains.
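The contrast between cumulative and positional principles can be made concrete with a familiar pair of notations. The sketch below is a loose illustration only (standard Roman numerals standing in for a broadly cumulative-additive system, decimal for a cipherized positional one); it is not the paper's own typology:

```python
def to_roman(n):
    """Render n in Roman numerals: the value is recovered largely by
    summing repeated signs (cumulation), not by digit position."""
    table = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, sign in table:
        while n >= value:          # repeat each sign as often as needed
            out.append(sign)
            n -= value
    return "".join(out)

# The same number in a cumulative and in a positional notation:
print(to_roman(1987), "vs", 1987)  # MCMLXXXVII vs 1987
```

Even this toy pair exhibits the classificatory issue the paper addresses: Roman numerals mix cumulation with subtractive conventions (IV, CM), so a useful typology has to handle mixed principles, not just pure types.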
This paper investigates the following question: when can one reliably infer the existence of an intersection point from a diagram presenting crossing curves or lines? Two cases are considered, one from Euclid's geometry and the other from basic real analysis. I argue for the acceptability of such an inference in the geometric case, but against it in the analytic case. Though this question is somewhat specific, the investigation is intended to contribute to the more general question of the extent and limits of reliable diagrammatic reasoning in mathematics.
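In the analytic case, the existence of an intersection point is secured not by the diagram but by the intermediate value theorem applied to the difference of the two functions. A minimal sketch of that analytic route, assuming continuity and a sign change of f - g on the interval (the helper name is hypothetical):

```python
def crossing_point(f, g, lo, hi, tol=1e-12):
    """Locate an intersection of continuous curves y=f(x) and y=g(x)
    on [lo, hi] by bisection, given that f - g changes sign there
    (the intermediate value theorem setting of the analytic case)."""
    d = lambda x: f(x) - g(x)
    assert d(lo) * d(hi) < 0, "no sign change: a crossing is not guaranteed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if d(lo) * d(mid) <= 0:    # keep the half where the sign changes
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# y = x and y = 1 - x visibly cross; the IVT argument pins the point at 0.5
print(crossing_point(lambda x: x, lambda x: 1 - x, 0.0, 1.0))
```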
Three different ways in which systems of axioms can contribute to the discovery of new notions are presented, and they are illustrated by the various ways in which lattices have been introduced into mathematics by Schröder et al. These historical episodes reveal that the axiomatic method is not only a way of systematizing our knowledge, but can also be used as a fruitful tool for discovering and introducing new mathematical notions. Looked at from this perspective, the creative aspect of axiomatics for mathematical practice is brought to the fore.
Mathematical concepts are explications, in Carnap's sense, of vague or otherwise non-clear concepts; mathematical theories have an empirical and a deductive component. From this perspective, I argue that the empirical component of a mathematical theory may be tested together with the fruitfulness of its explications. Using these ideas, I furthermore give an argument for mathematical realism, based on the indispensability argument combined with a weakened version of confirmational holism.
In the present paper, I go beyond these examples by bringing into play an example that I find more experimental in nature, namely that of the use of the so-called PSLQ algorithm in researching integer relations between numerical constants. It is the purpose of this paper to combine a historical presentation with a preliminary exploration of some philosophical aspects of the notion of experiment in experimental mathematics. This dual goal will be sought by analysing these aspects as they are presented by some of the protagonists of the field and discussing them using notions from contemporary philosophy of science.
Peano's axiomatizations of geometry are abstract and non-intuitive in character, whereas Peano stresses his appeal to concrete spatial intuition in the choice of the axioms. This poses the problem of understanding the interrelationship between abstraction and intuition in his geometrical works. In this article I argue that axiomatization is, for Peano, a methodology to restructure geometry and isolate its organizing principles. The restructuring produces a more abstract presentation of geometry, which does not contradict its intuitive content but only puts it into a particular form.
This chapter gives a detailed study of diagram-based reasoning in Euclidean plane geometry (Books I, III), as well as an exploration of how to characterise a geometric practice. First, an account is given of diagram attribution: basic geometrical claims are classified as exact (equalities, proportionalities) or co-exact (containments, contiguities); exact claims may only be inferred from prior entries in the demonstration text, but co-exact claims may be asserted based on what is seen in the diagram. Diagram control by constructions is necessary for this to work. Case-branching occurs when a construction renders a diagram un-representative. The roles of diagrams in reductio arguments, and of objection in probing a demonstration, are discussed.
In 1870 Jordan proved that the composition factors of two composition series of a group are the same. Almost 20 years later Hölder (1889) was able to extend this result by showing that the factor groups, which are quotient groups corresponding to the composition factors, are isomorphic. This result, nowadays called the Jordan-Hölder Theorem, is one of the fundamental theorems in the theory of groups. The fact that Jordan, who was working in the framework of substitution groups, was able to prove only a part of this theorem is often used to emphasize the importance and even the necessity of the abstract conception of groups, which was employed by Hölder. However, as a little-known paper from 1873 reveals, Jordan had all the necessary ingredients to prove the Jordan-Hölder Theorem at his disposal (namely, composition series, quotient groups, and isomorphisms), and he also noted a connection between composition factors and corresponding quotient groups. Thus, I argue that the answer to the question posed in the title is “Yes.” It was not the lack of the abstract notion of groups which prevented Jordan from proving the Jordan-Hölder Theorem, but the fact that he did not ask the right research question that would have led him to this result. In addition, I suggest some reasons why this has been overlooked in the historiography of algebra, and I argue that, by hiding computational and cognitive complexities, abstraction has important pragmatic advantages.
In this interesting and engaging book, Shabel offers an interpretation of Kant's philosophy of mathematics as expressed in his critical writings. Shabel's analysis is based on the insight that Kant's philosophical standpoint on mathematics cannot be understood without an investigation into his perception of mathematical practice in the seventeenth and eighteenth centuries. She aims to illuminate Kant's theory of the construction of concepts in pure intuition—the basis for his conclusion that mathematical knowledge is synthetic a priori. She does this through a contextualized interpretation of his notion of mathematical construction, which she argues can be approached by looking at Euclid's Elements and Christian Wolff's mathematical textbooks. The importance of the former for her interpretation is justified by the fact that nearly all of Kant's mathematical examples in the Critique are Euclidean propositions. The importance of the latter is revealed through the fact that Wolff's textbooks were not only widely read and representative of the state of elementary mathematics during Kant's time; Kant was also intimately familiar with them. During the thirty years prior to the publication of the Critique, he used the textbooks in the college-level introductory courses in mathematics and physics that he taught. In the introduction to her book, Shabel helpfully distinguishes her approach to Kant's philosophy of mathematics from that of previous commentators. She points out that most commentators assessed Kant's thoughts on mathematics in terms of the ‘supposedly devastating effects of the discovery of non-Euclidean geometry on his theory of space’.1 Bertrand Russell, for example, criticized Kant for his lack of a proper ….
On a traditional view, the primary role of a mathematical proof is to warrant the truth of the resulting theorem. This view fails to explain why it is very often the case that a new proof of a theorem is deemed important. Three case studies from elementary arithmetic show, informally, that there are many criteria by which ordinary proofs are valued. I argue that at least some of these criteria depend on the methods of inference the proofs employ, and that standard models of formal deduction are not well-equipped to support such evaluations. I discuss a model of proof that is used in the automated deduction community, and show that this model does better in that respect.
This paper is a contribution to the question of how aspects of science have been perceived through history. In particular, I will discuss how the contribution of axiomatics to the development of science and mathematics was viewed in 20th century philosophy of science and philosophy of mathematics. It will turn out that in connection with scientific methodology, in particular regarding its use in the context of discovery, axiomatics has received only very little attention. This is a rather surprising result, since axiomatizations have been employed extensively in mathematics, science, and also by the philosophers themselves.
With respect to the confirmation of mathematical propositions, proof possesses an epistemological authority unmatched by other means of confirmation. This paper is an investigation into why this is the case. I make use of an analysis drawn from an early reliability perspective on knowledge to help make sense of mathematical proof's singular epistemological status.