In this study two strands of inferentialism are brought together: the philosophical doctrine of Brandom, according to which meanings are, in general, inferential roles, and the logical doctrine prioritizing proof theory over model theory and approaching meaning in logical, especially proof-theoretical, terms.
This book offers a comprehensive account of logic that addresses fundamental issues concerning the nature and foundations of the discipline. The authors claim that these foundations can be established not only without strong metaphysical assumptions, but also without hypostasizing logical forms as specific entities. They present a systematic argument that the primary subject matter of logic is our linguistic interaction rather than our private reasoning, and that it is thus misleading to see logic as revealing "the laws of thought". In this sense, fundamental logical laws are implicit in our "language games" and are thus more similar to social norms than to the laws of nature. Peregrin and Svoboda also show that logical theories, despite the fact that they rely on rules implicit in our actual linguistic practice, firm up these rules and make them explicit. By carefully scrutinizing the project of logical analysis, the authors demonstrate that logical rules can best be seen as products of a so-called reflective equilibrium. They suggest that we can profit from viewing languages as "inferential landscapes" and logicians as "geographers" who map them and try to pave safe routes through them. This book is an essential resource for scholars and researchers engaged with the foundations of logical theories and the philosophy of language.
There may be various reasons for claiming that meaning is normative, and, moreover, very different senses attached to the claim. However, all such claims have faced fierce resistance from those philosophers who insist that meaning is not normative in any nontrivial sense of the word. In this paper I sketch one particular approach to meaning that claims its normativity, and defend it against the anti-normativist critique: namely, the approach of Brandomian inferentialism. However, my defense is not restricted to inferentialism in any narrow sense, for it encompasses a much broader spectrum of approaches to meaning, connected with the Wittgensteinian and especially Sellarsian view of language as an essentially rule-governed enterprise; and indeed I refrain from claiming that the version of inferentialism I present here is in every detail the version developed by Brandom.
While according to the inferentialists meaning is always a kind of inferential role, proponents of other approaches to semantics often doubt that actual meanings, as they see them, can generally be reduced to inferential roles. In this paper we propose a formal framework for assessing this reducibility hypothesis.
Variations on the argument "Inferences are moves from meaningful statements to meaningful statements; hence the meanings cannot be inferential roles" are often used as a knock-down argument against inferentialism. In this short paper I indicate that the argument is simply a non sequitur.
Doing Worlds with Words throws light on the problem of meaning as the meeting point of linguistics, logic and philosophy, and critically assesses the possibilities and limitations of elucidating the nature of meaning by means of formal logic, model theory and model-theoretic semantics. The main thrust of the book is to show that it is misguided to understand model theory metaphysically and so to try to base formal semantics on something like formal metaphysics; rather, the book argues that model theory and similar tools of the analysis of language should be understood as capturing the semantically relevant, especially inferential, structure of language. From this vantage point, the reader gains a new perspective on many of the traditional concepts and problems of logic and the philosophy of language, such as meaning, reference, truth and the nature of formal logic.
The article addresses two closely related questions: What are the criteria of adequacy of logical formalization of natural language arguments, and what gives logic the authority to decide which arguments are good and which are bad? Our point of departure is the criticism of the conception of logical formalization put forth, in a recent paper, by M. Baumgartner and T. Lampert. We argue that their account of formalization as a kind of semantic analysis brings about more problems than it solves. We also argue that the criteria of adequate formalization need not be based on truth conditions associated with logical formulas; in our view, they are better based on structural (inferential) grounds. We then put forward our own version of the criteria. The upshot of the discussion that follows is that the quest for an adequate formalization in a suitable logical language is best conceived of as the search for a Goodmanian reflective equilibrium.
The heyday of the discussions initiated by Searle's claim that computers have syntax, but no semantics, has now passed, yet philosophers and scientists still tend to frame their views on artificial intelligence in terms of syntax and semantics. In this paper I do not intend to take part in these discussions; my aim is more fundamental, viz. to ask what claims about syntax and semantics in this context can mean in the first place. I argue that their sense is so unclear that their ability to act as markers within any disputes on artificial intelligence is severely compromised, and hence that their employment brings us nothing more than an illusion of explanation.
The perennial question – What is meaning? – receives many answers. In this paper I present and discuss inferentialism – a recent approach to semantics based on the thesis that to have (such and such) a meaning is to be governed by (such and such) a cluster of inferential rules. I point out that this thesis presupposes that looking for meaning requires seeing language as a social institution (rather than, say, a psychological reality). I also indicate that this approach may be seen as a new embodiment of the old ideas of structuralism.
In a remarkable early paper, Wilfrid Sellars warned us that if we cease to recognize rules, we may well find ourselves walking on four feet; and it is obvious that within human communities the phenomenon of rules is ubiquitous. Yet from the viewpoint of the sciences, rules cannot be easily accounted for. Sellars himself, during his later years, managed to put a lot of flesh on the normative bones from which he assembled the remarkable skeleton of the early paper; and so did his followers. However, what they say is somewhat divergent; my aim in this paper is therefore to concentrate on the very concept of a rule and analyse it in the context of the question of what it is about us humans that makes us special.
The topic of this paper is the question of whether there is a logic which could justly be called the logic of inference. It may seem that at least since Prawitz, Dummett and others demonstrated the proof-theoretical prominence of intuitionistic logic, the ready answer is that it is this logic that is the obvious choice for the accolade. Though there is little doubt that this choice is correct (provided that inference is construed as inherently single-conclusion and as complying with the Gentzenian structural rules), I do not think that the usual justification of it is satisfactory. Therefore, I will first try to clarify what exactly is meant by the question, and then sketch a conceptual framework in which it can be reasonably handled. I will introduce the concept of 'inferentially native' logical operators (those which explicate inferential properties) and show that the axiomatization of these operators leads to the axiomatic system of intuitionistic logic. Finally, I will discuss what modifications of this answer enter the picture when more general notions of inference are considered.
In this paper we put forward and defend a view of the nature of logic that we call moderate anti-exceptionalism. In the first part of the paper we focus on the problem of genuine logical validity and consequence. We make use of examples from current debates to show that attempts to pinpoint the one and only authentic logic inevitably either yield irrefutable theories or lead to dead ends. We then outline a thoroughly naturalist account of logical consequence as grounded in rules implicit in human linguistic practices. We insist that there are only two existing kinds of language: natural languages, and artificial languages that have been forged by us. There is thus no room for a "genuine" language, and hence for "genuine" logic. We conclude that though logical theories are established – and are liable to criticism – in a similar fashion to those of the sciences, and in this sense logic is not exceptional, to fulfill its mission logic must lay claim to normative authority over our argumentation and reasoning, which makes its methodology somewhat special. Logical theory is not meant to provide just an explanation; the standards it establishes also serve as a tool, providing for a reinforcement of our rational communication.
In this book I confront the common conception of language, according to which meaning is a matter of a word–thing relation, with the structuralist view, according to which meaning cannot exist unless expressions are interconnected with one another in certain ways. I show that such structuralism is not only a matter of Ferdinand de Saussure, but that it also figures (under the name of holism) in the foundations of the (post-)analytic philosophy of Quine, Davidson, Sellars and Brandom. I also show that it is not incompatible with the formal-logical approach to meaning elaborated by Carnap, Montague and others.
The entire development of modern logic is characterized by various forms of confrontation between what has come to be called proof theory and what has earned the label of model theory. For a long time the widely accepted view was that while model theory captures directly what logical formalisms are about, proof theory is merely our technical means of getting some incomplete grip on this; but in recent decades the situation has altered. Not only did proof theory expand into new realms, generalizing the concept of proof in various directions; many philosophers also realized that meaning may be seen as primarily consisting in certain rules rather than in language–world links. However, the possibility of construing meaning as an inferential role is often seen as essentially compromised by the limits of proof-theoretical means. The aim of this paper is to sort out the cluster of problems besetting logical inferentialism by disentangling and clarifying one of them, namely determining the power of various inferential frameworks as measured by that of explicitly semantic ones.
Tarskian model theory is almost universally understood as a formal counterpart of the preformal notion of semantics, of the "linkage between words and things". The widespread opinion is that to account for the semantics of natural language is to furnish its set-theoretic interpretation in a suitable model structure, as exemplified by Montague (1974).
The paper presents an argument against a "metaphysical" conception of logic according to which logic spells out a specific kind of mathematical structure that is somehow inherently related to our factual reasoning. In contrast, it is argued that it is always an empirical question whether a given mathematical structure really does capture a principle of reasoning. (More generally, it is argued that it is not meaningful to replace an empirical investigation of a thing by an investigation of its a priori analyzable structure without paying due attention to the question of whether it really is the structure of the thing in question.) It is proposed to elucidate the situation by distinguishing two essentially different realms with which our reason must deal: "the realm of the natural", constituted by the things of our empirical world, and "the realm of the formal", constituted by the structures that we use as "prisms" to view, to make sense of, and to reconstruct the world. It is suggested that this vantage point may throw light on many foundational problems of logic.
Inferentialism, which I am going to present in detail in the following sections, is the view that meanings are, roughly, roles that are acquired by types of sounds and inscriptions in virtue of their being treated according to the rules of our language games, roughly in the sense in which wooden pieces acquire certain roles by being treated according to the rules of chess. The most important consequences are that (i) a meaning is not an object labeled (stood for, represented ...) by an expression; and that (ii) meaning is normative in the sense that to say that an expression means thus and so is to say that it should be used so and so. The founding father of inferentialism is Brandom (1994; 2000). (However, nothing in this paper hinges on the fact that the version of inferentialism defended here is identical with Brandom's.) This position provokes two kinds of objections. First, there are general objections to the very normativity of meaning, which do not target inferentialism specifically; these I have addressed elsewhere. Besides these, there are objections targeted more specifically at inferentialism. Probably the most discussed specimen of such objections is the one, raised repeatedly by Jerry Fodor, Ernest LePore and others, to the effect that though meanings should be compositional, the compositionality of inferential roles is unattainable. This is the kind of objection I am going to deal with here. (Hand in hand with this objection go various allegations of the circularity of inferentialism, which we will also discuss.) To do this, I will exploit the long-standing comparison of language to chess, as it seems particularly helpful for making the inferentialist account of language plausible. This comparison, to be sure, has its limits beyond which it may become severely misleading; but as long as we keep them in mind, it can serve us very well.
In recent years, I have published a number of papers addressing various aspects of inferentialism. These papers, I believe, do provide a relatively multifaceted picture of (my version of) this enterprise, though still a picture that is in some respects patchy. This has made me start working on this book, which should bring my ideas on the various aspects and dimensions of inferentialism into a desirable synthesis. In building the individual chapters, I usually start by taking parts of my published papers as basic building blocks, putting them together, and then trying to make them fit with each other, and with the rest of the book, seamlessly. As a result, material from the older papers gets upgraded, so that the chapters no longer contain many pieces of the papers in their original form. As inferentialism is a new and unsettled matter, I am not only putting forward some new ideas, but in some cases I also have to put together new frameworks to enable me to articulate these ideas intelligibly in the first place. I think that doing this with a reasonable outcome is not possible without some feedback. I do get some from my colleagues, but I will be grateful to anybody who would like to comment on anything presented here.
The followers of Wilfrid Sellars are often divided into "right" and "left" Sellarsians, according to whether they believe, in Mark Lance's words, that "linguistic roles constitutive of meaning and captured by dot quoted words are 'normative all the way down.'" The present article anatomizes this division and argues that it is not easy to give it a nontrivial sense. In particular, the article argues that it is not really possible to construe it as a controversy related to ontology, and goes on to argue that it is also not easy to construe it as one concerning the translatability of the normative idiom into the non-normative one. The conclusion is that the only coherent interpretation of this disagreement is as a disagreement about the possibility and desirability of assuming a standpoint "inside" our linguistic practices.
Inferentialism is the conviction that to be meaningful in the distinctively human way, or to have a 'conceptual content', is to be governed by inferential rules of a certain kind. The term was coined by Robert Brandom as a label for his theory of language; however, it is also naturally applicable (and is becoming increasingly common) within the philosophy of logic.
Logic is usually considered to be the study of logical consequence – of the most basic laws governing how a statement's truth depends on the truth of other statements. Some of the pioneers of modern formal logic, notably Hilbert and Carnap, assumed that the only way to get hold of the relation of consequence was to reconstruct it as a relation of inference within a formal system built upon explicit inferential rules. Even Alfred Tarski in 1930 seemed to foresee no kind of consequence other than one induced by a set of inference rules: "Let A be an arbitrary set of sentences of a particular discipline. With the help of certain operations, the so-called rules of inference, new sentences are derived from the set A, called the consequences of the set A. To establish these rules of inference, and with their help to define exactly the concept of consequence, is again a task of special metadisciplines; in the usual terminology of set theory the schema of such a definition can be formulated as follows: The set of all consequences of the set A is the intersection of all sets which contain the set A and are closed under the given rules of inference." (p. 63) Thereby also the concept of truth came to be reconstructed as inferability from the empty set of premises. (More precisely, this holds only for non-empirical, necessary truth; but of course logic never set itself the task of studying empirical truth.) From this viewpoint, logic came to look like the enterprise of explicating consequence in terms of inference.
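Tarski's verbal schema quoted above can be put compactly in modern notation; the following is only a sketch, where $R$ stands for the given set of inference rules and the symbol $\mathrm{Cn}_R$ is our choice, not Tarski's:

```latex
% Tarski's 1930 schema: the consequences of A form the smallest
% R-closed superset of A (notation Cn_R is ours, not Tarski's)
\[
  \mathrm{Cn}_R(A) \;=\; \bigcap \{\, X \mid A \subseteq X
    \ \text{and}\ X \text{ is closed under the rules in } R \,\}
\]
% Necessary truth then reappears as inferability from no premises:
\[
  A \ \text{is (necessarily) true} \quad\text{iff}\quad
  A \in \mathrm{Cn}_R(\varnothing)
\]
```

The second line makes explicit the remark in the abstract that truth came to be reconstructed as inferability from the empty set of premises.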
While most theoreticians of meaning in the first half of the twentieth century subscribed to a representational theory (viewing meanings as entities stood for by expressions), the second half of the century was marked by the rise of various versions of use-theories of meaning. The roots of this 'pragmatist turn' are detectable in the writings of the later Wittgenstein, the Oxford speech act theorists (Austin, Grice) and the American neopragmatists (Quine, Sellars). Though it is now rather popular (and sometimes even fashionable) to invoke the use-theory of meaning, it is far less popular to inquire what such a theory really is. In this paper we try to give at least part of the answer, whereby we find that the usual conception of such a theory is unsatisfactory. We propose that for an improvement we must, together with Wittgenstein and Sellars, conceive of language as a (tool of a) rule-based activity, which enables us to replace the concept of disposition, usually constituting the backbone of the use-theory, by the concept of propriety. The resulting normative version of the use-theory then becomes the investigation of the rules which expressions acquire vis-à-vis the rules of the relevant language games – especially the rules of inference.
Imagine a Paleolithic hunter who has failed to hunt down anything for a couple of days and is hungry. He has an urgent desire, the desire to eat, which he is not able to fulfill – his desire is frustrated by the world. Now imagine a contemporary bank clerk who went to work having forgotten his wallet at home and is hungry too. He too is unable to fulfill his urgent desire to eat, because it is frustrated by the world. From the viewpoint of the two individuals the situation is very similar. However, it differs in at least one crucial respect. While the hunter cannot eat because there is no food available to him anywhere near (at least as far as he can find out), the clerk can easily find tons of food – it is enough to visit the nearest supermarket. The reason why he cannot get the food is not that it would be physically impossible, but that taking food from the store's shelves without paying is forbidden. This story reminds us that many of the barriers that constrain our lives, and make us find our way merely within the space delimited by them, are no longer barriers in the literal sense of the word – they are no longer produced by the conspiracy of the causal laws that form our physical niche. Rather, they are produced by the conspiracy of attitudes of our fellow humans – they are deliberate rules, rather than inexorable natural laws. In this way evolution is canalized not by an environment relatively independent of it, but rather by the ploy of the organisms it has itself brought into being. I think that realizing the full import of this autocatalytic situation may lead us, on the one hand, to an appreciation of certain philosophical doctrines, pervasive especially after Kant, regarding normativity as the hallmark of the human, while, on the other hand, to seeing how they are illuminated by scientific doctrines regarding the development of the human race and its continuities/discontinuities with its animal cousins.
In this paper I put forward a thesis regarding the anatomy of "cultural evolution", in particular the way the "cultural" transmission of behavioral patterns came to piggyback, through us humans, on the transmission effected by genetic evolution. I claim that what grounds and supports this new kind of transmission is a complex behavioral "meta-pattern" that makes it possible to grasp a pattern as something that "ought to be", i.e. that transforms the pattern into what we can call a rule. (Here I draw especially on the philosophical insights of Wilfrid Sellars.) In this way I interlink empirical research done in evolutionary theory with some more speculative philosophical theories, thus shedding new light on the former and adding an empirical footing to the latter.
There are various approaches to truth and knowledge (in fact, cataloguing them has become something of a philosophical industry of its own); and in many cases their explanations are taken to underlie the explanation of other crucial concepts, such as language, reason, etc. Especially in recent years, some of these approaches have come to be based on reducing semantics to pragmatics. An outstanding example of such a pragmatist approach is that of Bob Brandom, who bases the explication of both truth and knowledge on his consideration of normative pragmatics. A less explicitly pragmatist approach to truth and knowledge was offered by Donald Davidson (who is surely not a pragmatist in the narrow sense of the word, but may be thought of as one in the wider sense proposed by Brandom, 2002, in which pragmatism means starting from the practical rather than the theoretical). In this paper I would like to point out that the discrepancy between these two approaches may be smaller than it would prima facie seem. To show this, I first turn my attention briefly to the general problem of theoretically accounting for human minds.
The paper addresses foundational questions concerning the dynamic semantics of natural language based on dynamic logic of the Groenendijk–Stokhofian kind. Discussing a series of model calculi of increasing complexity, it shows in detail how the usual semantics of dynamic logic can be seen as emerging from accounting for certain inferential patterns of natural language, namely those governing anaphora. In this way, the current 'dynamic turn' of logic is argued to be reasonably seen not as the product of changing the focus of logic from the relation of entailment to "a structure of human cognitive action" (van Benthem), but rather as merely another step in our long-term effort to master more and more inferential patterns.
Wilfrid Sellars's analysis of the concept of meaning led, in effect, to the conclusion that the meaning of an expression is its inferential role. This view is often challenged by the claim that inference is a matter of syntax, and syntax can never yield semantics. I argue that this challenge is based on a confusion of two senses of "syntax", and I try to throw some new light on the concept of inferential role. My conclusion is that the Sellarsian view that something can have meaning only if it is subject to inferences is viable, and that inferential role is a plausible explication of meaning. However, I also argue, pace Sellars, that the inferential nature of meaning does not prevent us from engaging in the enterprise of Carnapian formal semantics.
Formal semantics is an enterprise which accounts for meaning in formal, mathematical terms, in the expectation of providing a helpful explication of the concept of the meaning of specific kinds of words (such as logical ones), or of words and expressions generally. Its roots go back to Frege, who proposed exempting concepts, the meanings of predicative expressions, from the legislation of psychology and relocating them under that of mathematics. This started a spectacular enterprise, fostered at first within formal logic and later moving into the realm of natural languages, and featuring a series of eminent scholars, from Tarski and Carnap to Montague and David Lewis. Partly independently of this, Frege set the agenda for a long-term discussion of the question of what a natural language is, his own contribution being that language should be seen not as a matter of subjective psychology, but rather as a reality objective in the sense in which mathematics is objective. His formal semantics, then, was just an expression of this conception of language. And many theoreticians now take it for granted that formal semantics is inseparably connected with a Platonist conception of language. Moreover, the more recent champions of formal semantics, Montague and David Lewis, took it for granted that natural language is nothing else than a structure of the very kind envisaged by the theories of formal logicians. While Montague claims quite plainly that there is no substantial difference between formal and natural languages ("I reject the contention," he says, 1974, p. 188, "that an important theoretical difference exists between formal and natural languages"), Lewis states that it is fully correct to say that a linguistic community entertains a language in the form of a mathematical structure (Lewis, 1975).
When I began lecturing on the theory of semantics at the Faculty of Arts of Charles University in 1992, I felt an intense need to provide my students with a textbook. There were practically no accounts in Czech of the headlong development of this interdisciplinary field, launched in the seventies by the successful "crossing of logic with linguistics" by Richard Montague and others and not slowing down to this day (with the honorable exception of the approach of so-called transparent intensional logic, the work of the Czech émigré Pavel Tichý, about which Pavel Materna wrote in this country). Survey publications of the kind needed by anyone wanting to orient themselves in the field were, however, only beginning to appear even worldwide at that time: in 1990 the book An Introduction to Semantics by Gennaro Chierchia and Sally McConnell-Ginet came out, and in 1991 a group of Dutch logicians published, under the collective pseudonym L.T.F. Gamut, the two-volume book Language and Meaning. In this situation I tried to write, as quickly as possible, a textbook that my students and other readers interested in semantics could use; and since, paradoxically, greater interest in publishing it was shown by the Faculty of Arts of Masaryk University in Brno (where I have personally never taught, but where interest in semantics was tirelessly stimulated, and the publication of my text arranged, by my colleague Materna) than by the Faculty of Arts of my home Charles University, the text appeared in Brno, under the title Úvod do teoretické sémantiky.
In a memorable paper, Donald Davidson (1986, p. 446) insists that "there is no such thing as a language, not if a language is anything like what many philosophers and linguists have supposed". I have always taken this as an exaggeration, albeit an apt exaggeration that might be philosophically helpful. Now when it comes to predication, what I would have expected to hear from the same author would be along the lines of "there is no such thing as predication ...". But instead of this I hear something very different (Davidson, 2005, p. 77): [I]f we do not understand predication, we do not understand how any sentence works, nor can we account for the structure of the simplest thought that is expressible in language. At one time there was much discussion of what was called the "unity of proposition"; it is just this unity that a theory of predication must explain. The philosophy of language lacks its most important chapter without such a theory, the philosophy of mind is missing its crucial first step if it cannot describe the nature of judgment; and it is woeful if metaphysics cannot say how a substance is related to its attributes. I find myself at odds with just about everything written in this paragraph; and what is worse, my disagreement stems from a notion of language which I believe I have acquired also by reading Davidson. Reading this passage, I desperately sought an indication that it was leading up to some catch, and not meant to be taken at face value. But, alas, I am afraid there is none. To avoid misunderstanding: I see nothing wrong in understanding predication as a clearly delimited linguistic phenomenon. We put together one kind of expression, which we have come to call the subject, with a different kind of expression, called the predicate, possibly ...
The concept of semantic interpretation is a source of chronic confusion: the introduction of a notion of interpretation can be the result of several quite different kinds of considerations. Interpretation can be understood in at least three ways: as a process of dis-abstraction of formulas, as a technical tool for the sake of characterizing truth, or as a reconstruction of meaning-assignment. However essentially different these motifs are, and however properly they must be kept apart, they can all be brought to one and the same notion of interpretation: the notion of a compositional evaluation of expressions inducing a possible distribution of truth values among statements.
When God created Adam, he whispered into his ear: In all contexts of action you shall take rules into account, even if it be only the rule that you should seek out rules that you could take into account. If you cease to take rules into account, you will walk on all fours.
Is logic, feasibly, a product of natural selection? In this paper we treat this question as dependent upon the prior question of where logic is founded. After excluding other possibilities, we conclude that logic resides in our language, in the shape of inferential rules governing the logical vocabulary of the language. This means that knowledge of (the laws of) logic is inseparable from the possession of the logical constants they govern. In this sense, logic may be seen as a product of natural selection: the emergence of logic requires the development of creatures who can wield structured languages of a specific complexity, and who are capable of putting these languages to use within specific discursive practices.
In part one, I give an (unsystematic) overview of the development of the logical tools which have been employed in the course of the analysis of referring expressions, i.e. definite and (specific) indefinite singular terms, of natural language. I present Russell's celebrated theory of definite descriptions, which I see as an attempt to explain definite reference in terms of unique existence (and reference in general in terms of existence simpliciter); and I present Hilbert's epsilon calculus as an attempt to explain existence in terms of choice. Then I turn to contemporary, dynamic approaches to the analysis of singular terms and point out that only within a dynamic framework can the Russellian and Hilbertian ideas yield a truly satisfactory analysis of singular terms, and consequently of reference and coreference. I call attention to the fact that current results of formal semantics demonstrate the advantages of viewing singular terms as denoting updates, i.e. as a means of changing the context (information state), and especially that part of the context which I call the individually.
In this paper we first propose an exact definition of the concept of inferential role, and then go on to examine the question of whether subscribing to inferentialism necessitates throwing away existing theories of formal semantics, as we know them from logic, or whether these could somehow be accommodated within the inferentialist framework. The conclusion we reach is that it is possible to make inferentialist sense of even those common semantic theories which are usually considered incompatible with inferentialism, such as the standard semantics of second-order logic.
Can we base the whole of logic solely on the concept of incompatibility? My motivation for asking this is twofold: firstly, a technical interest in what minimal foundations of logic might be; and secondly, the existence of philosophers who have taken incompatibility as the ultimate key to human reason (viz., e.g., Hegel's concept of determinate negation). The main aim of this contribution is to tackle two related questions: Is it possible to reduce the foundations of logic to the mere concept of incompatibility? and Does this reduction lead us to a specific logical system? We conclude that the answers, respectively, are YES and a qualified NO (qualified in the sense that basing semantics on incompatibility does make some logical systems more natural than others, but without ruling out the alternatives). A search for the bare bones of logic generally leads one to the relation of inference (or consequence). This way is explored meticulously by Koslow (1992). He defines an implication structure as, in effect, an ordered pair ⟨S, ⇒⟩, where S is a set and ⇒ ⊆ Pow(S) × S fulfills certain (relatively simple) restrictions. And obviously, if we reduce incompatibility to inference, which is achievable by the well-known ex contradictione quodlibet principle, we reach a logic based on incompatibility. The kind of logic flowing most straightforwardly from this setting is the intuitionist one. However, there is also the approach taken by R. Brandom and A. Aker (2008), who have set up a logic based directly on incompatibility. They define an incompatibility structure as an ordered pair ⟨S, ⊥⟩ such that S is a set and ⊥ ⊆ Pow(S) (again fulfilling certain restrictions). The authors introduce logical operators in such a way that they reach classical logic. Does this mean that inference 'naturally' leads to intuitionist logic, whereas incompatibility leads to the classical one? Myself, I have argued that it is indeed intuitionist logic that is the logic of inference (see Peregrin, 2008).
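The two kinds of structures contrasted above can be stated side by side; the following is only a sketch in modern notation, and the symbols ⇒ and ⊥ are our choices rather than necessarily the authors' own:

```latex
% Koslow-style implication structure: a set with a consequence relation
\[
  \langle S, \Rightarrow \rangle, \qquad
  \Rightarrow \;\subseteq\; \mathrm{Pow}(S) \times S
\]
% Brandom-Aker-style incompatibility structure: a set with a family
% of jointly incoherent subsets
\[
  \langle S, \perp \rangle, \qquad
  \perp \;\subseteq\; \mathrm{Pow}(S)
\]
% The ex contradictione quodlibet bridge reducing incompatibility
% to inference: a set is incoherent iff it entails everything
\[
  X \in \;\perp \quad\text{iff}\quad
  X \Rightarrow A \ \text{ for every } A \in S
\]
```

The third display makes explicit the reduction mentioned in the text: once incoherent sets are identified with sets from which everything follows, an implication structure induces an incompatibility structure.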