This paper attempts to construct a systematic and plausible account of the necessity of the past. The account proposed is meant to explicate the central Ockhamist thesis of the primacy of the pure present and to vindicate Ockham's own non-Aristotelian response to the challenge of logical determinism.
Bobzien presents the definitive study of one of the most interesting intellectual legacies of the ancient Greeks: the Stoic theory of causal determinism. She explains what it was, how the Stoics justified it, and how it relates to their views on possibility, action, freedom, moral responsibility, moral character, fatalism, logical determinism and many other topics. She demonstrates the considerable philosophical richness and power that these ideas retain today.
ABSTRACT: This is a little piece directed at the newcomer to Aristotle, making some general remarks about reading Aristotle at the beginning and end, with, sandwiched in between, a brief and much simplified discussion of some common misunderstandings of Aristotle's philosophy, concerning spontaneity, causal indeterminism, freedom-to-do-otherwise, free choice, agent causation, logical determinism, teleological determinism, artistic creativity and freedom (eleutheria).
It is well known that every propositional logic which satisfies certain very natural conditions can be characterized semantically using a multi-valued matrix ([Łoś and Suszko, 1958; Wójcicki, 1988; Urquhart, 2001]). However, there are many important decidable logics whose characteristic matrices necessarily consist of an infinite number of truth values. In such a case it might be quite difficult to find any of these matrices, or to use one when it is found. Even when a logic does have a finite characteristic matrix, it might be difficult to discover this fact, or to find such a matrix. The deep reason for these difficulties is that in an ordinary multi-valued semantics the rules and axioms of a system must be considered as a whole, and there is no method for separately determining the semantic effects of each rule alone.
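The notion of a characteristic matrix invoked above can be made concrete with a toy example. The sketch below is ours, not the paper's: it uses the three-valued Łukasiewicz matrix as a stock illustration, and `valid_in_matrix` and the connective names are our own labels.

```python
from itertools import product

# A minimal sketch of a multi-valued matrix, using the three-valued
# Lukasiewicz matrix: truth values {0, 0.5, 1}, with 1 the sole
# designated value.
NEG = lambda a: 1 - a
AND = lambda a, b: min(a, b)
OR = lambda a, b: max(a, b)
IMP = lambda a, b: min(1, 1 - a + b)

def valid_in_matrix(formula, n_vars):
    """A formula (a function of n truth values) is valid in the matrix
    iff it takes the designated value 1 under every assignment."""
    return all(formula(*row) == 1
               for row in product([0, 0.5, 1], repeat=n_vars))

print(valid_in_matrix(lambda p: IMP(p, p), 1))      # identity law: valid
print(valid_in_matrix(lambda p: OR(NEG(p), p), 1))  # excluded middle: fails at 0.5
```

For the logics the abstract has in view, no such finite table suffices: their characteristic matrices need infinitely many truth values, which is precisely what makes them hard to find and to use.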
Suppose it were known, by someone else, what you are going to choose to do tomorrow. Wouldn't that entail that tomorrow you must do what it was known in advance that you would do? In spite of your deliberating and planning, in the end, all is futile: you must choose exactly as it was earlier known that you would. The supposed exercise of your free will is ultimately an illusion. Historically, the tension between foreknowledge and the exercise of free will was addressed in a religious context. According to orthodox views in the West, God was claimed to be omniscient (and hence in possession of perfect foreknowledge), and yet God was supposed to have given humankind free will. Attempts to solve the apparent contradiction often involved attributing to God special properties, e.g. being 'outside' of time. However, the trouble with such solutions is that they are generally unsatisfactory on their own terms. Even more serious is the fact that they leave untouched the problem posed not by God's foreknowledge but by that of any human being. Do human beings have foreknowledge? Certainly, of at least some events and behaviors. Thus we have a secular counterpart of the original problem. A human being's foreknowledge of another's choices, just as God's, would seem to preclude the exercise of human free will. Various ways of trying to solve the problem – e.g. by putting constraints on the truth-conditions for statements, or by 'tightening' the conditions necessary for knowledge – are examined and shown not to work. Ultimately the alleged incompatibility of foreknowledge and free will is shown to rest on a subtle logical error. When the error, a modal fallacy, is recognized and remedied, the problem evaporates.
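The modal fallacy the paper points to is standardly diagnosed as a scope confusion; the rendering below is a gloss in conventional epistemic-modal notation, not the paper's own formulation:

```latex
% Valid: necessarily, whatever is known is true.
\Box\,(Kp \rightarrow p)
% Invalid shift of the necessity operator onto the consequent:
Kp \rightarrow \Box p
```

From the first principle together with $Kp$ one may infer $p$, but not $\Box p$: foreknowledge guarantees that you will act as foreseen, not that you must.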
The Lazy Argument, as it is preserved in historical testimonies, is not logically conclusive. In this form, it appears to have been proposed in favor of part-time fatalism (including past-time fatalism). The argument assumes that the free-will assumption is unacceptable from the standpoint of the logical fatalist but plausible for some of the non-universal or part-time fatalists. There are indications that the layout of the argument is not genuine, but taken over from a Megarian source and later transformed. The genuine form of the argument seems to have been different, far closer to Megarian logical fatalism, and its purpose was not to defend laziness. If the historical argument is to lead to a logically satisfactory solution, some additional assumptions and further tuning are needed.
I have three main objectives in this essay. First, in chapter 2, I shall put forward and justify what I call worldlessness, by which I mean the following: all truths (as well as falsehoods) are wholly independent of any circumstances, not only time and place but also possible worlds. It follows from this view that whatever is actually true must be taken as true with respect to every possible world, which means that all truths are (in a sense) necessary. However, the account I shall propound is different from what is known in the trade as necessitarianism, i.e. the view that there is only one possible world, viz. the actual one, for the doctrine of the worldlessness of truth values, despite its commitment to the necessity of truths and falsehoods, is quite compatible with the idea of there being other possible worlds. Another important issue in chapter 2, explored in particular in section 2.12, is the claim that there is no real change in the world. Secondly, in chapter 3 I consider the eminent traditional argument for determinism, deriving from Aristotle, namely logical determinism, i.e. determinism justified by an appeal to the logical principle of bivalence (that all proper statements, including those concerning the future, are either true or false). In this connection I try to show that (i) the formulation of the conclusion of this argument as "Whatever will happen will happen of necessity" is implausible, at least from the modern point of view, (ii) the formulation as "Whatever will happen will happen inevitably" is more to the point, and (iii) on the basis of the worldless and timeless aspect advocated in chapter 2, this latter formulation is quite harmless, essentially amounting to the trivial statement, "Whatever will happen will happen". Thirdly, in chapter 4 I study theological determinism, or determinism that arises from God's supposed providential control over everything that happens.
In this connection, I shall survey some historical accounts of the relation between human free will and determinism (not only theological but also causal determinism); the philosophers whose views I shall attend to include Chrysippus, St. Augustine, Boethius and Aquinas. I shall in particular consider G.W. Leibniz's theodicean aspirations, viz. his solution to the problem of evil and, especially, his compatibilist attempts to reconcile human free will with the strictly deterministic flow of actual events. I think it is important to try to explicate Leibniz's ingenious account of these matters, since it seems that it has not been fully appreciated in the literature, not even by contemporary Leibniz scholars (such as B. Mates, R.C. Sleigh, C. Wilson, R.M. Adams and D. Rutherford). In providing the Leibnizian compatibilist solution to the problem of determinism and freedom in chapter 4, I shall utilize the approach of chapter 2.
A modal logic for translating a sequence of English sentences to a sequence of logical forms is presented, characterized by Kripke models with points formed from input/output sequences, and valuations determined by entailment relations. Previous approaches based (to one degree or another) on Quantified Dynamic Logic are embeddable within it. Applications to presupposition and ambiguity are described, and decision procedures and axiomatizations supplied.
Bertrand Russell, in the second of his 1914 Lowell lectures, Our Knowledge of the External World, asserted famously that ‘every philosophical problem, when it is subjected to the necessary analysis and purification, is found either to be not really philosophical at all, or else to be, in the sense in which we are using the word, logical’ (Russell 1993, p. 42). He went on to characterize that portion of logic that concerned the study of forms of propositions, or, as he called them, ‘logical forms’. This portion of logic he called ‘philosophical logic’. Russell asserted that ... some kind of knowledge of logical forms, though with most people it is not explicit, is involved in all understanding of discourse. It is the business of philosophical logic to extract this knowledge from its concrete integuments, and to render it explicit and pure. (p. 53) Perhaps no one still endorses quite this grand a view of the role of logic and the investigation of logical form in philosophy. But talk of logical form retains a central role in analytic philosophy. Given its widespread use in philosophy and linguistics, it is rather surprising that the concept of logical form has not received more attention by philosophers than it has. The concern of this paper is to say something about what talk of logical form comes to, in a tradition that stretches back to (and arguably beyond) Russell’s use of that expression. This will not be exactly Russell’s conception. For we do not endorse Russell’s view that propositions are the bearers of logical form, or that appeal to propositions adds anything to our understanding of what talk of logical form comes to. But we will be concerned to provide an account responsive to the interests expressed by Russell in the above quotations, though one clarified of extraneous elements, and expressed precisely.
For this purpose, it is important to note that the concern expressed by Russell in the above passages, as the surrounding text makes clear, is a concern not just with logic conceived narrowly as the study of logical terms, but with propositional form more generally, which includes, e.g., such features as those that correspond to the number of argument places in a propositional function, and the categories of objects which propositional...
After a brief but necessary characterization of the notion of determinism, I discuss and critically evaluate four views on the relationship between determinism and free will by taking into account both (i) what matters most to us in terms of a free will worth wanting and (ii) which capacities can be legitimately attributed to human beings without contradicting what we currently know from the natural sciences. The main point of the paper is to argue that the libertarian faces a dilemma: on the one hand, the possibility of 'doing otherwise' – a necessary condition of free will according to the libertarian – requires indeterminism or chance, but any kind of indeterminism has the undesirable consequence of separating our actions from our character and our past. On the other hand, if our character has to be fully expressed by our actions, determinism becomes necessary and we seem to be metaphysically unfree. I conclude by showing that the dispute between compatibilists and libertarians possesses an important but hitherto very neglected pragmatic component as well, dependent on two different ethical attitudes toward a meaningful life.
The starting point of this paper concerns the apparent difference between what we might call absolute truth and truth in a model, following Donald Davidson. The notion of absolute truth is the one familiar from Tarski’s T-schema: ‘Snow is white’ is true if and only if snow is white. Instead of being a property of sentences, as absolute truth appears to be, truth in a model, that is, relative truth, is evaluated in terms of the relation between sentences and models.

I wish to examine the apparent dual nature of logical truth (without dwelling on Davidson), and suggest that we are dealing with a distinction between a metaphysical and a linguistic interpretation of truth. I take my cue from John Etchemendy, who suggests that absolute truth could be considered as being equivalent to truth in the ‘right model’, i.e., the model that corresponds with the world. However, the notion of ‘model’ is not entirely appropriate here, as it is closely associated with relative truth. Instead, I propose that the metaphysical interpretation of truth may be illustrated in modal terms, by metaphysical modality in particular. One of the tasks that I will undertake in this paper is to develop this modal interpretation, partly building on my previous work on the metaphysical interpretation of the law of non-contradiction (Tahko 2009).

After an explication of the metaphysical interpretation of logical truth, a brief study of how this interpretation connects with some recent important themes in philosophical logic follows. In particular, I discuss logical pluralism and propose an understanding of pluralism from the point of view of the metaphysical interpretation.
It is widely held that the current debate on the mind-body problem in analytic philosophy began during the 1950s at two distinct sources: one in America, deriving from Herbert Feigl's writings, and the other in Australia, related to writings by U. T. Place and J. J. C. Smart (Feigl 1967). Jaegwon Kim recently wrote that "it was the papers by Smart and Feigl that introduced the mind-body problem as a mainstream metaphysical Problematik of analytical philosophy, and launched the debate that has continued to this day" (Kim 1998, 1). Nonetheless, it is not at all obvious why these particular articles sparked a debate, nor why Feigl's work in particular came to play such a prominent part in it, nor how and to what extent Feigl's approach rests on the logical empiricism he endorsed.
Here I motivate and defend a new counterexample to logical (or non-causal) versions of the direct argument for responsibility-determinism incompatibilism. Such versions purport to establish incompatibilism via an inference principle to the effect that non-responsibility transfers along relations of logical consequence, including those that hold between earlier and later states of a deterministic world. Unlike previous counterexamples, this case doesn't depend on preemptive overdetermination; nor can it be blocked with a simple modification of the inference principle. In defending this counterexample, I show that van Inwagen's technical notion of being partly responsible for a state of affairs, which figures in his statement of the principle, is problematic.
According to the Consequence Argument, the truth of determinism plus other plausible principles would yield the conclusion that we have no free will. In this paper I will argue that the conception of determinism typically employed in the various versions of the Consequence Argument is not plausible. In particular, I will argue that, taken most straightforwardly, determinism as defined in the Consequence Argument would imply that the existence of God is logically impossible. This is quite an implausible result. The truth or falsity of determinism is typically taken to be a contingent, empirical matter. But how could the empirical discovery that determinism is true lead to the conclusion that God's existence is a logical impossibility? The defender of the Consequence Argument can avoid this conclusion, but only at the cost of making other similarly implausible claims. The objection…
Kant, in various parts of his treatment of causality, refers to determinism or the principle of sufficient reason as an inescapable principle. In fact, in the Second Analogy we find the elements to reconstruct a purely phenomenal determinism as a logical and tautological truth. I endeavour in this article to gather these elements into an organic theory of phenomenal causality and then show, in the third section, with a specific argument which I call the “paradox of phenomenal observation”, that this phenomenal determinism is the only rational approach to causality, because any logico-reductivistic approach, such as the Humean one, would destroy the temporal order and so the very possibility of talking of a causal relation. I also believe that, all things said, Kant did not achieve a much greater comprehension of the problem than Hume did in his theory of causality, for he did not free a phenomenal approach from the impasse of reductivism, as his reflections on “simultaneous causation” and “vanishing quantities” indeed show; this I will argue in Sect. 4 of this article.
The abstract noun "Determinism" functions like a family name for a group of philosophical doctrines each of which asserts that, in some sense or other, events occur of necessity when and as they do. Different members of the family stake out different doctrinal territories, some construing the necessity involved in purely logical terms, some in causal terms, and still others in terms of predictability. Each has to do with necessary connections between past, present and future.
Auguste Comte's doctrine of the three phases through which sciences pass (the theological, the metaphysical, and the positive) allows us to explain what John Stuart Mill was attempting in his magnum opus, the System of Logic: namely, to move the science of logic to its terminal and 'positive' stage. Both Mill's startling account of deduction and his unremarked solution to the Humean problem of induction eliminate the notions of necessity or force—in this case, the 'logical must'—characteristic of a science's metaphysical stage. Mill's treatment had a further surprising payoff: his solution to the Problem of Necessity (what today we call the problem of determinism and freedom of the will).
The paper applies the theory presented in A Formal Ontology of Situations (this journal, vol. 41 (1982), no. 4) to obtain a typology of metaphysical systems by interpreting them as different ontologies of situations. Four are treated in some detail: Hume's diachronic atomism, Laplacean determinism, Hume's synchronic atomism, and Wittgenstein's logical atomism. Moreover, the relation of that theory to the situation semantics of Perry and Barwise is discussed.
The present paper investigates why the Logical Empiricists remained silent about one of the most philosophy-laden matters of theoretical physics of the day, the Principle of Least Action (PLA). In the two decades around 1900, the PLA enjoyed a remarkable renaissance as a formal unification of mechanics, electrodynamics, thermodynamics, and relativity theory. Taking Ernst Mach's historico-critical stance, it could be liberated from much of its physico-theological dross. Variational calculus, the mathematical discipline on which the PLA was based, obtained a new rigorous basis. These three developments prompted Max Planck to consider the PLA as the formal embodiment of his convergent realist methodology. Typically rejecting ontological reductionism, David Hilbert took the PLA as the key concept in his axiomatizations of physical theories. It served one of the main goals of the axiomatic method: 'deepening the foundations'. Although Moritz Schlick was a student of Planck's, and Hans Hahn and Philipp Frank enjoyed close ties to Göttingen, the PLA became a veritable shibboleth to them. Rather than being worried by its historical connections with teleology and determinism, they erroneously identified Hilbert's axiomatic method tout court with Planck's metaphysical realism. The Logical Empiricists' strict containment policy against metaphysics required so strict a separation between physics and mathematics as to exclude even those features of the PLA and the axiomatic method not tainted with metaphysics.
Taking 'rationalized judgments' to be those formed by inference from other judgments, I argue against 'Extreme Determinism': the thesis that theoretical rationalization just is a kind of predetermination of 'conclusion-judgments' by 'premise-judgments'. The argument rests upon two key lemmas: firstly, that a deliberator cannot take the outcome of his/her deliberation - in this case, his/her assent to some proposition - to be predetermined (I call this the 'Openness Requirement'); secondly, that a subject's logical insight into his/her premise-judgments must enter into the explanation of any judgment s/he forms that is rationalized by those judgments. My contention is that, given the Openness Requirement, no version of Extreme Determinism can allow for the role played by logical insight in the rationalization of judgment. I end by indicating briefly how this result might figure in a wider argument against any form of determinism about rationalized judgment, and by explaining why I have focused specifically upon rebutting a deterministic view of theoretical as opposed to 'practical' rationalization.
I discuss three ways of responding to the logical omniscience problems faced by traditional ‘possible worlds’ epistemic logics. Two of these responses were put forward by Hintikka and the third by Cresswell; all three have been influential in the literature on epistemic logic. I show that both of Hintikka's responses fail and present some problems for Cresswell’s. Although Cresswell's approach can be amended to avoid certain unpalatable consequences, the resulting formal framework collapses to a sentential model of knowledge, which defenders of the ‘possible worlds’ approach are frequently critical of.
In his 1984 book, Horst Wessel presents the system Fs of strict logical consequence (see also (Wessel, 1979)). The author maintained that this system axiomatizes the relation |=s of strict logical consequence between formulas of the Classical Propositional Calculus (CPC). Let |= be the classical consequence relation in CPC. The relation |=s is defined as follows: phi |=s psi iff phi |= psi, every variable from psi occurs in phi, and neither phi is a contradiction nor psi is a tautology. Clearly, if phi |=s psi, then neither phi is a tautology nor psi is a contradiction. Intuitions connected with the relation |=s were presented in (Wessel, 1984). An analysis of the relation |=s is also carried out in (Pietruszczak, 2004). In the present paper we show that the system Fs is not a complete axiomatization of the relation |=s. Moreover, we present the system VFs, which is an «extension to completeness» of Fs.
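The definition of |=s above is decidable for propositional formulas and can be checked directly by enumerating valuations. The sketch below is our illustration of the defined relation, not Wessel's system Fs itself; the formula encoding and all function names are our own.

```python
from itertools import product

def variables(f):
    """Set of propositional variables in a formula; formulas are either
    string atoms or tuples like ('and', 'p', ('not', 'q'))."""
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(a) for a in f[1:]))

def evaluate(f, v):
    """Classical truth value of f under valuation v (dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    op, *args = f
    if op == 'not':
        return not evaluate(args[0], v)
    if op == 'and':
        return evaluate(args[0], v) and evaluate(args[1], v)
    if op == 'or':
        return evaluate(args[0], v) or evaluate(args[1], v)
    if op == 'imp':
        return (not evaluate(args[0], v)) or evaluate(args[1], v)
    raise ValueError(f"unknown connective: {op}")

def _valuations(vs):
    for row in product([True, False], repeat=len(vs)):
        yield dict(zip(vs, row))

def classical_consequence(phi, psi):
    """phi |= psi: every valuation making phi true makes psi true."""
    vs = sorted(variables(phi) | variables(psi))
    return all(evaluate(psi, v) for v in _valuations(vs) if evaluate(phi, v))

def strict_consequence(phi, psi):
    """phi |=s psi: classical consequence plus variable inclusion,
    with phi not a contradiction and psi not a tautology."""
    return (classical_consequence(phi, psi)
            and variables(psi) <= variables(phi)
            and any(evaluate(phi, v) for v in _valuations(sorted(variables(phi))))
            and not all(evaluate(psi, v) for v in _valuations(sorted(variables(psi)))))
```

For instance, ('and', 'p', 'q') |=s 'p' holds, while 'p' |=s ('or', 'p', 'q') fails the variable-inclusion condition even though it is classically valid.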
Information is contained in statements and «flows» from their structure and the meaning of the expressions they contain. The information that flows only from the meaning of the logical constants and the logical structure of statements we will call logical information. In this paper we present a formal explication of this notion which is proper for sentences that are Boolean combinations of atomic sentences. We therefore limit ourselves to analyzing the logical information flowing only from the meaning of the truth-value connectives and the logical structure of sentences built with these connectives.
In our paper “A general characterization of the variable-sharing property by means of logical matrices”, a general class of so-called “Relevant logical matrices”, RMLs, is defined. The aim of this paper is to define a class of simpler Relevant logical matrices, RMLs′, serving the same purpose as RMLs, to wit: any logic verified by an RML′ has the variable-sharing property and related properties predicable of the logic of entailment E and of the logic of relevance R.
Many philosophers claim that understanding a logical constant (e.g. ‘if, then’) fundamentally consists in having dispositions to infer according to the logical rules (e.g. Modus Ponens) that fix its meaning. This paper argues that such dispositionalist accounts give us the wrong picture of what understanding a logical constant consists in. The objection here is that they give an account of understanding a logical constant which is inconsistent with what seem to be adequate manifestations of such understanding. I then outline an alternative account according to which understanding a logical constant is not to be understood dispositionally, but propositionally. I argue that this account is not inconsistent with intuitively correct manifestations of understanding the logical constants.
Up to now, theories of semantic information have implicitly relied on logical monism, the view that there is one true logic. The latter position has been explicitly challenged by logical pluralists. Adopting an unbiased attitude in the philosophy of information, we take a suggestion from Beall and Restall to heart and exploit logical pluralism to recognise another kind of pluralism. The latter is called informational pluralism, a thesis whose implications for a theory of semantic information we explore.
In "Logical consequence: A defense of Tarski" (Journal of Philosophical Logic, vol. 25, 1996, pp. 617-677), Greg Ray defends Tarski's account of logical consequence against the criticisms of John Etchemendy. While Ray's defense of Tarski is largely successful, his attempt to give a general proof that Tarskian consequence preserves truth fails. Analysis of this failure shows that de facto truth preservation is a very weak criterion of adequacy for a theory of logical consequence and should be replaced (...) by a stronger absence-of-counterexamples criterion. It is argued that the latter criterion reflects the modal character of our intuitive concept of logical consequence, and it is shown that Tarskian consequence can be proved to satisfy this criterion for certain choices of logical constants. Finally, an apparent inconsistency in Ray's interpretation of Tarski's position on the modal status of the consequence relation is noted. (shrink)
In this paper, the authors discuss Frege's theory of "logical objects" (extensions, numbers, truth-values) and the recent attempts to rehabilitate it. We show that the 'eta' relation George Boolos deployed on Frege's behalf is similar, if not identical, to the encoding mode of predication that underlies the theory of abstract objects. Whereas Boolos accepted unrestricted Comprehension for Properties and used the 'eta' relation to assert the existence of logical objects under certain highly restricted conditions, the theory of abstract objects uses unrestricted Comprehension for Logical Objects and banishes encoding (eta) formulas from Comprehension for Properties. The relative mathematical and philosophical strengths of the two theories are discussed. Along the way, new results in the theory of abstract objects are described, involving: (a) the theory of extensions, (b) the theory of directions and shapes, and (c) the theory of truth values.
Shapiro (Philos Q 61:320–342, 2011) argues that, if we are deflationists about truth, we should be deflationists about logical consequence. Like the truth predicate, he claims, the logical consequence predicate is merely a device of generalisation, and more substantial characterisation, e.g. proof- or model-theoretic, is mistaken. I reject his analogy between truth and logical consequence and argue that, by appreciating how the logical consequence predicate is used as well as the goals of proof theory and model theory, we can be deflationists about truth but not logical consequence.
Logic is formal in the sense that all arguments of the same form as logically valid arguments are also logically valid and hence truth-preserving. However, it is not known whether all arguments that are valid in the usual model-theoretic sense are truth-preserving. Tarski claimed that it could be proved that all arguments that are valid (in the sense of validity he contemplated in his 1936 paper on logical consequence) are truth-preserving. But he did not offer the proof. The question arises whether the usual model-theoretic sense of validity and Tarski's 1936 sense are the same. I argue in this paper that they probably are not, and that the proof Tarski had in mind, although unusable to prove that model-theoretically valid arguments are truth-preserving, can be used to prove that arguments valid in Tarski's 1936 sense are truth-preserving.
This paper deals with the adequacy of the model-theoretic definition of logical consequence. Logical consequence is commonly described as a necessary relation that can be determined by the form of the sentences involved. In this paper, necessity is assumed to be a metaphysical notion, and formality is viewed as a means to avoid dealing with complex metaphysical questions in logical investigations. Logical terms are an essential part of the form of sentences and thus have a crucial role in determining logical consequence. Gila Sher and Stewart Shapiro each propose a formal criterion for logical terms within a model-theoretic framework, based on the idea of invariance under isomorphism. The two criteria are formally equivalent, and thus we have a common ground for evaluating and comparing Sher's and Shapiro's philosophical justifications of their criteria. It is argued that Shapiro's blended approach, by which models represent possible worlds under interpretations of the language, is preferable to Sher’s formal-structural view, according to which models represent formal structures. The advantages and disadvantages of both views’ reliance on isomorphism are discussed.
This paper contains five observations concerning the intended meaning of the intuitionistic logical constants: (1) if the explanations of this meaning are to be based on a non-decidable concept, that concept should not be that of 'proof'; (2) Kreisel's explanations using extra clauses can be significantly simplified; (3) the impredicativity of the definition of → can be easily and safely ameliorated; (4) the definition of → in terms of 'proofs from premises' results in a loss of the inductive character of the definitions of ∨ and ∃; and (5) the same occurs with the definition of ∀ in terms of 'proofs with free variables'.
We continue the work initiated in Herzig and Lorini (J Logic Lang Inform, in press), whose aim is to provide a minimalistic logical framework combining the expressiveness of dynamic logic, in which actions are first-class citizens in the object language, with the expressiveness of logics of agency such as STIT and logics of group capabilities such as CL and ATL. We present a logic (Deterministic Dynamic Logic of Agency) which supports reasoning about actions and joint actions of agents and coalitions, and about agentive and coalitional capabilities. In this logic it is supposed that, once all agents have selected a joint action, the effect of this joint action is deterministic. In order to assess its expressiveness we prove that it embeds Coalition Logic. We then extend the logic with modal operators for agents' preferences, and show that the resulting logic is sufficiently expressive to capture the game-theoretic concepts of best response and Nash equilibrium.
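The game-theoretic notions mentioned at the end (best response, Nash equilibrium) can be illustrated outside the modal language. The sketch below is a plain finite-game check of our own devising, with all function names our own; it is not part of the paper's logic.

```python
from itertools import product

def best_responses(payoff, their_action, n_actions):
    """Actions maximizing the player's payoff against a fixed opponent
    action; payoff maps (my_action, their_action) -> numeric payoff."""
    best = max(payoff[(a, their_action)] for a in range(n_actions))
    return {a for a in range(n_actions) if payoff[(a, their_action)] == best}

def nash_equilibria(payoff1, payoff2, n1, n2):
    """Pure-strategy profiles (a1, a2) in which each player's action is a
    best response to the other's; both payoff dicts are keyed by (a1, a2)."""
    flipped2 = {(a2, a1): v for (a1, a2), v in payoff2.items()}
    return [(a1, a2)
            for a1, a2 in product(range(n1), range(n2))
            if a1 in best_responses(payoff1, a2, n1)
            and a2 in best_responses(flipped2, a1, n2)]
```

For the prisoner's dilemma, for instance, this returns mutual defection as the unique pure-strategy equilibrium, matching the standard analysis.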
Jerónimo Pardo's analysis of the problems raised by some popular trinitarian paralogisms is studied in this paper. The purpose is to show how the notions employed by theologians to solve theological problems were introduced into a textbook on logic to deal with some genuinely logical problems. First, the problem, common to all logical approaches, of achieving a fine-grained analysis of the logical form of syllogistic inferences. Second, the problem, typical of the terminist approach to logic, of guaranteeing that Latin is an adequate vehicle for logical analysis.
We present a framework that provides a logic for science by generalizing the notion of logical (Tarskian) consequence. This framework introduces hierarchies of logical consequences, the first level of each of which is identified with deduction. We argue for identifying the second level of the hierarchies with inductive inference. The notion of induction presented here has some resonance with Popper's notion of scientific discovery by refutation. Our framework rests on the assumption of a restricted class of structures, in contrast to the permissiveness of classical first-order logic. We make a distinction between deductive and inductive inference via the notions of compactness and weak compactness. Connections with the arithmetical hierarchy and formal learning theory are explored. For the latter, we argue against the identification of inductive inference with the notion of learnability in the limit. Several results highlighting desirable properties of these hierarchies of generalized logical consequence are also presented.