In this research I will focus on a basis for a formal model built on an alternative kind of logic invented by Hans Götzsche: Occurrence Logic (Occ Log), which is not based on truth values or truth functionality. I have also taken into account the tense logic developed and elaborated by A. N. Prior. In this article I will provide a conceptual and logical foundation for formal Occurrence Logic grounded in symbolic logic and will illustrate the most important relations between symbolic logic and Occurrence Logic, with particular attention to arguments based on symbolic occurrence logic. The interrelationships and dependencies between symbolic logic and Occ Log could be conducive to an approach that can express and analyse ‘the occurrences of the world’s descriptions based on various logical principles at different moments’. Symbolic Occ Log checks the occurrence conditioning and the priority of occurrences by employing occurrence values. I shall conclude that the desired approach could gradually support the development of a truth-functionally independent first-order predicate calculus for the semantic analysis of different propositions within world descriptions.
Perfect for students with no background in logic or philosophy, Simple Formal Logic provides a full system of logic adequate to handle everyday and philosophical reasoning. By keeping out artificial techniques that aren’t natural to our everyday thinking process, Simple Formal Logic trains students to think through formal logical arguments for themselves, ingraining in them the habits of sound reasoning. Simple Formal Logic features: a companion website with abundant exercise worksheets, study supplements (including flashcards for symbolizations and for deduction rules), and an instructor’s manual; two levels of exercises for beginning and more advanced students; and a glossary of terms, abbreviations and symbols. This book arose out of a popular course that the author has taught to all types of undergraduate students at Loyola University Chicago. He teaches formal logic without the artificial methods–methods that often seek to solve farfetched logical problems without any connection to everyday and philosophical argumentation. The result is a book that teaches easy and more intuitive ways of grappling with formal logic–and is intended as a rigorous yet easy-to-follow first course in logical thinking for philosophy majors and non-philosophy majors alike.
There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and categorical representations, which are learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their categorical representations. Higher-order symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations. Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories’ names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
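To make the grounding scheme sketched in this abstract concrete, the following Python snippet is a minimal, hypothetical toy: a "categorical representation" is approximated by a nearest-centroid feature detector over simulated sensory vectors, elementary symbols are the names assigned via those detectors, and a higher-order representation is a symbol string built only from grounded names. All names and data here are invented for illustration; the proposal itself is architectural and does not prescribe this particular code.

```python
import numpy as np

# Toy "sensory projections": 2-D feature vectors standing in for proximal input.
rng = np.random.default_rng(0)
samples = {
    "horse": rng.normal(loc=[1.0, 0.0], scale=0.1, size=(20, 2)),
    "zebra": rng.normal(loc=[1.0, 1.0], scale=0.1, size=(20, 2)),
}

# "Categorical representations": invariant features, here simply class centroids
# learned from the sensory samples (a stand-in for a connectionist feature learner).
centroids = {name: data.mean(axis=0) for name, data in samples.items()}

def elementary_symbol(sensory_input):
    """Return the name (elementary symbol) grounded in the nearest categorical representation."""
    return min(centroids, key=lambda n: np.linalg.norm(sensory_input - centroids[n]))

# Higher-order symbolic representation: a symbol string describing category
# membership, composed only of names that are themselves grounded above.
new_input = rng.normal(loc=[1.0, 0.95], scale=0.1)
membership_claim = (elementary_symbol(new_input), "is-a", "zebra")
print(membership_claim)   # e.g. ('zebra', 'is-a', 'zebra')
```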
Recent experimental evidence from developmental psychology and cognitive neuroscience indicates that humans are equipped with unlearned elementary mathematical skills. However, formal mathematics has properties that cannot be reduced to these elementary cognitive capacities. The question then arises how human beings cognitively deal with more advanced mathematical ideas. This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment. In this view, mathematical symbols are not only used to express mathematical concepts—they are constitutive of the mathematical concepts themselves. Mathematical symbols are epistemic actions, because they enable us to represent concepts that are literally unthinkable with our bare brains. Using case studies from the history of mathematics and from educational psychology, we argue for an intimate relationship between mathematical symbols and mathematical cognition.
Although Kant (1998) envisaged a prominent role for logic in the argumentative structure of his Critique of Pure Reason, logicians and philosophers have generally judged Kant’s general, formal, and transcendental logics harshly. Against this verdict, the paper argues that Kant’s transcendental logic is a logic in the strict formal sense, albeit with a semantics and a definition of validity that are vastly more complex than those of first-order logic. The main technical application of the formalism developed here is a formal proof that Kant’s logic is after all a distinguished subsystem of first-order logic, namely what is known as geometric logic.
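For readers unfamiliar with the term, ‘geometric logic’ is standardly characterized (independently of this paper) as the fragment of first-order logic whose axioms are sequents of the following shape, where the formulas on either side are built from atomic formulas using only conjunction, (possibly infinitary) disjunction and existential quantification:

```latex
% General form of a geometric sequent / geometric implication:
\forall \bar{x}\,\bigl(\varphi(\bar{x}) \rightarrow \psi(\bar{x})\bigr),
\qquad
\varphi,\psi \;::=\; \text{atomic} \mid \varphi \wedge \psi \mid \textstyle\bigvee_{i} \varphi_i \mid \exists y\,\varphi
```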
Completed in 1983, this work culminates nearly half a century of the late Alfred Tarski’s foundational studies in logic, mathematics, and the philosophy of science. Written in collaboration with Steven Givant, the book appeals to a very broad audience and requires only a familiarity with first-order logic. It is of great interest to logicians and mathematicians interested in the foundations of mathematics, but also to philosophers interested in logic, semantics, algebraic logic, or the methodology of the deductive sciences, and to computer scientists interested in developing very simple computer languages rich enough for mathematical and scientific applications. The authors show that set theory and number theory can be developed within the framework of a new, different, and simple equational formalism, closely related to the formalism of the theory of relation algebras. There are no variables, quantifiers, or sentential connectives. Predicates are constructed from two atomic binary predicates (which denote the relations of identity and set-theoretic membership) by repeated applications of four operators that are analogues of the well-known operations of relative product, conversion, Boolean addition, and complementation. All mathematical statements are expressed as equations between predicates. There are ten logical axiom schemata and just one rule of inference: that of replacing equals by equals, familiar from high school algebra. Though such a simple formalism may appear limited in its powers of expression and proof, this book proves quite the opposite. The authors show that it provides a framework for the formalization of practically all known systems of set theory, and hence for the development of all classical mathematics. The book contains numerous applications of the main results to diverse areas of foundational research: propositional logic; semantics; first-order logics with finitely many variables; definability and axiomatizability questions in set theory, Peano arithmetic, and real number theory; representation and decision problems in the theory of relation algebras; and decision problems in equational logic.
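To convey the flavour of such a variable-free equational formalism, the lines below use standard relation-algebra notation (relative product ;, conversion ˘, Boolean addition +, complementation ¯); these are conventional symbols chosen for illustration rather than the book’s own notation. Inclusions, and with them familiar properties of a relation, reduce to equations:

```latex
% Inclusion expressed equationally, and two sample properties of a relation R:
A \subseteq B \;\Longleftrightarrow\; A + B = B, \qquad
R^{\smile} = R \ \ \text{(symmetry)}, \qquad
R\,;R + R = R \ \ \text{(transitivity, i.e. } R\,;R \subseteq R\text{)}
```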
Symbolic arithmetic is fundamental to science, technology and economics, but its acquisition by children typically requires years of effort, instruction and drill [1, 2]. When adults perform mental arithmetic, they activate nonsymbolic, approximate number representations [3, 4], and their performance suffers if this nonsymbolic system is impaired [5]. Nonsymbolic number representations also allow adults, children, and even infants to add or subtract pairs of dot arrays and to compare the resulting sum or difference to a third array, provided that only approximate accuracy is required [6–10]. Here we report that young children, who have mastered verbal counting and are on the threshold of arithmetic instruction, can build on their nonsymbolic number system to perform symbolic addition and subtraction [11–15]. Children across a broad socio-economic spectrum solved symbolic problems involving approximate addition or subtraction of large numbers, both in a laboratory test and in a school setting. Aspects of symbolic arithmetic therefore lie within the reach of children who have learned no algorithms for manipulating numerical symbols. Our findings help to delimit the sources of children’s difficulties learning symbolic arithmetic, and they suggest ways to enhance children’s engagement with formal mathematics. We presented children with approximate symbolic arithmetic problems in a format that parallels previous tests of non-symbolic arithmetic in preschool children [8, 9]. In the first experiment, five- to six-year-old children were given problems such as “If you had twenty-four stickers and I gave you twenty-seven more, would you have more or less than thirty-five stickers?”. Children performed well above chance (65.0%, t(19) = 2.77, P = 0.012) without resorting to guessing or comparison strategies that could serve as alternatives to arithmetic. Children who have been taught no symbolic arithmetic therefore have some ability to perform symbolic addition problems. The children’s performance nevertheless fell short of performance on non-symbolic arithmetic tasks using equivalent addition problems with numbers presented as arrays of dots and with the addition operation conveyed by successive motions of the dots into a box (71.3% correct, F(1,34) = 4.26, P = 0.047) [8].
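To make the task structure concrete, the following Python sketch simulates the kind of approximate comparison the sticker problem calls for (is 24 + 27 more or less than 35?) under a simple scalar-variability model of the approximate number system; the Weber-fraction parameter and the decision rule are illustrative assumptions, not the authors’ model.

```python
import numpy as np

def approximate_compare(a, b, target, weber=0.25, trials=10_000, seed=0):
    """Simulate 'is a + b more or less than target?' with noisy magnitude estimates.

    Each quantity is represented with scalar variability: its estimate is drawn from
    a normal distribution whose spread grows linearly with the true value, a common
    simplification of the approximate number system (the Weber fraction is assumed).
    """
    rng = np.random.default_rng(seed)
    est_sum = rng.normal(a, weber * a, trials) + rng.normal(b, weber * b, trials)
    est_target = rng.normal(target, weber * target, trials)
    return float(np.mean(est_sum > est_target))  # proportion of "more" responses

# The sticker problem from the abstract: is 24 + 27 more or less than 35?
print(f"P('more') = {approximate_compare(24, 27, 35):.2f}")  # well above 0.5
```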
Brimming with visual examples of concepts, derivation rules, and proof strategies, this introductory text is ideal for students with no previous experience in logic. Students will learn translation both from formal language into English and from English into formal language; how to use truth trees and truth tables to test propositions for logical properties; and how to construct and strategically use derivation rules in proofs.
This work has been selected by scholars as being culturally important and is part of the knowledge base of civilization as we know it. This work is in the public domain in the United States of America, and possibly other nations. Within the United States, you may freely copy and distribute this work, as no entity has a copyright on the body of the work. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. To ensure a quality reading experience, this work has been proofread and republished using a format that seamlessly blends the original graphical elements with text in an easy-to-read typeface. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
This brief paperback is designed for symbolic/formal logic courses. It features the tree method proof system developed by Jeffrey. The new edition contains many more examples and exercises and is reorganized for greater accessibility.
What role, if any, does formal logic play in characterizing epistemically rational belief? Traditionally, belief is seen in a binary way: either one believes a proposition, or one doesn’t. Given this picture, it is attractive to impose certain deductive constraints on rational belief: that one’s beliefs be logically consistent, and that one believe the logical consequences of one’s beliefs. A less popular picture sees belief as a graded phenomenon.
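Stated schematically (a standard formulation, not specific to this text), the two deductive constraints are the following, where B is the set of propositions the agent believes:

```latex
% Consistency: the belief set does not entail a contradiction.
B \nvdash \bot
% Deductive closure: whatever the belief set entails is itself believed.
\text{if } B \vdash \varphi \text{ then } \varphi \in B
```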
Logic: Techniques of Formal Reasoning, 2/e is an introductory volume that teaches students to recognize and construct correct deductions. It takes students through all logical steps--from premise to conclusion--and presents appropriate symbols and terms, while giving examples to clarify principles. Logic, 2/e uses models to establish the invalidity of arguments, and includes exercise sets throughout, ranging from easy to challenging. Solutions are provided to selected exercises, and historical remarks discuss major contributions to the theories covered.
Natural Formalization proposes a concrete way of expanding proof theory from the meta-mathematical investigation of formal theories to an examination of “the concept of the specifically mathematical proof.” Formal proofs play a role for this examination inasmuch as they reflect the essential structure and systematic construction of mathematical proofs. We emphasize three crucial features of our formal inference mechanism: (1) the underlying logical calculus is built for reasoning with gaps and for providing strategic directions, (2) the mathematical frame is a definitional extension of Zermelo–Fraenkel set theory and has a hierarchically organized structure of concepts and operations, and (3) the construction of formal proofs is deeply connected to the frame through rules for definitions and lemmas. To bring these general ideas to life, we examine, as a case study, proofs of the Cantor–Bernstein Theorem that do not appeal to the principle of choice. A thorough analysis of the multitude of “different” informal proofs seems to reduce them to exactly one. The natural formalization confirms that there is one proof, but that it comes in two variants due to Dedekind and Zermelo, respectively. In this way it enhances the conceptual understanding of the represented informal proofs. The formal, computational work is carried out with the proof search system AProS that serves as a proof assistant and implements the above inference mechanism; it can be fully inspected at (see link below).
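For reference, the theorem that serves as the case study can be stated as follows; the abstract’s point is that the proofs under analysis establish it without appealing to the axiom of choice.

```latex
% Cantor–Bernstein: mutually injectable sets are equinumerous.
\text{If there exist injections } f : A \to B \text{ and } g : B \to A,
\text{ then there exists a bijection } h : A \to B.
```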
Modern semiotics is a branch of logic that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm-calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being correctly associated with a predator call in a stable manner. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of “lying” tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, able to support the gradual complexification of grammatical categories into syntax.
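As a deliberately minimal, hypothetical rendering of the kind of associative-learning setup described here, the Python sketch below pairs tutor calls with observed predators and tallies co-occurrences; the parameter names, error model and learning rule are assumptions made for illustration and do not reproduce the authors’ simulation or its quantitative thresholds.

```python
import random

def learn_alarm_calls(trials=5000, visual_error=0.05, lying_tutors=0.0, seed=1):
    """Toy associative learning of call -> predator associations (illustrative only)."""
    rng = random.Random(seed)
    predators = ["leopard", "eagle", "snake"]
    calls = {p: f"call_{p}" for p in predators}            # hypothetical call labels
    counts = {c: {p: 0 for p in predators} for c in calls.values()}

    for _ in range(trials):
        actual = rng.choice(predators)
        # A "lying" tutor emits a call that does not match the predator present.
        if rng.random() < lying_tutors:
            heard = calls[rng.choice([p for p in predators if p != actual])]
        else:
            heard = calls[actual]
        # The apprentice may misidentify the predator it sees (a visual mistake).
        if rng.random() < visual_error:
            seen = rng.choice([p for p in predators if p != actual])
        else:
            seen = actual
        counts[heard][seen] += 1                           # strengthen the co-occurrence

    # A call counts as "learned" if its most frequent associate is the right predator.
    return {call: max(assoc, key=assoc.get) for call, assoc in counts.items()}

print(learn_alarm_calls())                    # reliable tutors: correct call -> predator map
print(learn_alarm_calls(lying_tutors=0.4))    # more lying dilutes the correct associations
```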
(This is for the Cambridge Handbook of Analytic Philosophy, edited by Marcus Rossberg.) In this handbook entry, I survey the different ways in which formal mathematical methods have been applied to philosophical questions throughout the history of analytic philosophy. I consider: formalization in symbolic logic, with examples such as Aquinas’ third way and Anselm’s ontological argument; Bayesian confirmation theory, with examples such as the fine-tuning argument for God and the paradox of the ravens; foundations of mathematics, with examples such as Hilbert’s programme and Gödel’s incompleteness theorems; social choice theory, with examples such as Condorcet’s paradox and Arrow’s theorem; ‘how possibly’ results, with examples such as Condorcet’s jury theorem and recent work on intersectionality theory; and the application of advanced mathematics in philosophy, with examples such as accuracy-first epistemology.
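As a small worked illustration of one of the ‘how possibly’ results mentioned, Condorcet’s jury theorem can be evaluated directly: if n independent voters are each correct with probability p > 1/2, the chance that a simple majority is correct grows toward 1 as n increases. The snippet below just computes the standard binomial sum and is offered as an illustration, not as part of the surveyed entry.

```python
from math import comb

def majority_correct(n, p):
    """P(majority of n independent voters is correct), each correct with probability p.

    Standard binomial sum; n is assumed odd so that there are no ties.
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

# With p = 0.6, the accuracy of the majority rises toward 1 as the group grows.
for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
```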
Formalizing Euclid’s first axiom. Bulletin of Symbolic Logic 20 (2014): 404–405. (Coauthor: Daniel Novotný.) Euclid [fl. 300 BCE] divides his basic principles into what came to be called ‘postulates’ and ‘axioms’—two words that are synonyms today but which are commonly used to translate Greek words meant by Euclid as contrasting terms. Euclid’s postulates are specifically geometric: they concern geometric magnitudes, shapes, figures, etc.—nothing else. The first: “to draw a line from any point to any point”; the last: the parallel postulate. Euclid’s axioms are general principles of magnitude: they concern geometric magnitudes and magnitudes of other kinds as well, even numbers. The first is often translated “Things that equal the same thing equal one another”. There are other differences that are or might become important. Aristotle [fl. 350 BCE] meticulously separated his basic principles [archai, singular archê] according to subject matter: geometrical, arithmetic, astronomical, etc. However, he made no distinction that can be assimilated to Euclid’s postulate/axiom distinction. Today we divide basic principles into non-logical [topic-specific] and logical [topic-neutral], but this too is not the same as Euclid’s. In this regard it is important to be cognizant of the difference between equality and identity—a distinction often crudely ignored by modern logicians. Tarski is a rare exception. The four angles of a rectangle are equal to—not identical to—one another; the size of one angle of a rectangle is identical to the size of any other of its angles. No two angles are identical to each other. The sentence ‘Things that equal the same thing equal one another’ contains no occurrence of the word ‘magnitude’. This paper considers the problem of formalizing the proposition Euclid intended as a principle of magnitudes while being faithful to the logical form and to its information content.
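One naive first-order rendering, offered here only as an illustration of the problem the paper poses (it restricts the variables to magnitudes with a predicate M and uses a two-place equality relation E distinct from identity, and so is already unfaithful to a sentence containing no occurrence of the word ‘magnitude’), would be:

```latex
% "Things that equal the same thing equal one another":
\forall x\,\forall y\,\forall z\,
\bigl( M(x) \wedge M(y) \wedge M(z) \wedge E(x,z) \wedge E(y,z) \rightarrow E(x,y) \bigr)
```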
Against the view that symbol-based semiosis is a human cognitive uniqueness, we have argued that non-human primates such as African vervet monkeys possess symbolic competence, as formally defined by Charles S. Peirce. Here I develop this argument by showing that the equivocal role ascribed to symbols by “folk semiotics” stems from an incomplete application of the Peircean logical framework for the classification of signs, which describes three kinds of symbols: rheme, dicent and argument. In an attempt to advance the classification of semiotic processes, Peirce proposed several typologies with different degrees of refinement. Around 1903, he developed a division into ten classes. According to this typology, symbols can be further analysed in three subclasses (rheme, dicent, argument). I proceed to demonstrate that vervet monkeys employ dicent symbols. There are remarkable implications of this argument, since ‘symbolic species theory’ fails to explore the vast Peircean semiotic philosophy to frame questions regarding the emergence and evolution of symbolic processes.
The advantages and disadvantages of formalization in philosophy are summarized. It is concluded that formalized philosophy is an endangered speciality that needs to be revitalized and to increase its interactions with non-formalized philosophy. The enigmatic style that is common in philosophical logic must give way to explicit discussions of the problematic relationship between formal models and the philosophical concepts and issues that motivated their development.
We study, in an abstract and general framework, formal representations of dependence and groundedness which occur in semantic theories of truth. Our goals are: (a) to relate the different ways in which groundedness is defined according to the way dependence is represented; and (b) to represent different notions of dependence as instances of a suitable generalisation of the mathematical notion of functional dependence.
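One familiar instance of the kind of dependence relation at issue, stated here only as a hedged illustration of the general framework (the paper works more abstractly), is semantic dependence of a sentence on a set of sentences: the truth values of the members of the set settle the truth value of the sentence.

```latex
% phi depends on Sigma when any two interpretations that agree on Sigma agree on phi;
% grounded sentences are then those reachable from the non-semantic base by iterating this relation.
\varphi \text{ depends on } \Sigma
\;\Longleftrightarrow\;
\forall M, M' \bigl( M\!\restriction_{\Sigma} = M'\!\restriction_{\Sigma}
\;\Rightarrow\; ( M \models \varphi \Leftrightarrow M' \models \varphi ) \bigr)
```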
We investigate incomplete symbols, i.e. definite descriptions with scope-operators. Russell famously introduced definite descriptions by contextual definitions; in this article definite descriptions are introduced by rules in a specific calculus that is very well suited for proof-theoretic investigations. That is to say, the phrase ‘incomplete symbols’ is formally interpreted in terms of the existence of an elimination procedure. The last section offers semantical tools for interpreting the phrase ‘no meaning in isolation’ in a formal way.
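For orientation, Russell’s contextual definition, which the calculus discussed here replaces with inference rules, eliminates ‘the F’ only in the context of a surrounding predicate, so that the description has no meaning in isolation:

```latex
% Russell's in-context elimination of a definite description (wide scope shown):
G(\iota x\, F(x)) \;:\Leftrightarrow\; \exists x \bigl( \forall y\,( F(y) \leftrightarrow y = x ) \wedge G(x) \bigr)
```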
The symbol grounding problem is concerned with the question of how the knowledge used in AI programs, expressed as tokens in one form or another or simply symbols, could be grounded to the outside world. By grounding the symbols, it is meant that the system will know the actual objects, events, or states of affairs in the world to which each symbol refers and thus be worldly-wise. Solving this problem, it was hoped, would enable the program to understand its own action and hence be truly intelligent. The problem becomes more acute after a challenge posed by Searle in his now famous Chinese Room Gedanken experiment. Searle argued that no AI programs can be said to understand or have other cognitive states, if all they do is formal symbol manipulation.