In his classic 1936 essay On the Concept of Logical Consequence, Alfred Tarski used the notion of satisfaction to give a semantic characterization of the logical properties. Tarski is generally credited with introducing the model-theoretic characterization of the logical properties familiar to us today. However, in his book The Concept of Logical Consequence, Etchemendy argues that Tarski's account is inadequate for quite a number of reasons, and is actually incompatible with the standard model-theoretic account. Many of his criticisms are meant to apply to the model-theoretic account as well. In this paper, I discuss the following four critical charges that Etchemendy makes against Tarski and his account of the logical properties: (1)(a) Tarski's account of logical consequence diverges from the standard model-theoretic account at points where the latter account gets it right. (b) Tarski's account cannot be brought into line with the model-theoretic account, because the two are fundamentally incompatible. (2) There are simple counterexamples (enumerated by Etchemendy) which show that Tarski's account is wrong. (3) Tarski committed a modal fallacy when arguing that his account captures our pre-theoretical concept of logical consequence, and so obscured an essential weakness of the account. (4) Tarski's account depends on there being a distinction between the logical terms and the non-logical terms of a language, but (according to Etchemendy) there are very simple (even first-order) languages for which no such distinction can be made. Etchemendy's critique raises historical and philosophical questions about important foundational work. However, Etchemendy is mistaken about each of these central criticisms. In the course of justifying that claim, I give a sustained explication and defense of Tarski's account. Moreover, since I will argue that Tarski's account and the model-theoretic account really do come to the same thing, my subsequent defense of Tarski's account against Etchemendy's other attacks doubles as a defense against criticisms that would apply equally to the familiar model-theoretic account of the logical properties.
Alfred Tarski (1944) wrote that “the condition of the ‘essential richness’ of the metalanguage proves to be, not only necessary, but also sufficient for the construction of a satisfactory definition of truth.” But it has remained unclear what Tarski meant by an ‘essentially richer’ metalanguage. Moreover, DeVidi and Solomon (1999) have argued in this Journal that there is nothing that Tarski could have meant by that phrase which would make his pronouncement true. We develop an answer to the historical question of what Tarski meant by ‘essentially richer’ and pinpoint the general result that stands behind his essential richness claim. In defense of Tarski, we then show that each of the several arguments of DeVidi and Solomon is either moot or mistaken.
The problem with model-theoretic modal semantics is that it provides only the formal beginnings of an account of the semantics of modal languages. In the case of non-modal language, we bridge the gap between semantics and mere model theory by claiming that a sentence is true just in case it is true in an intended model. Truth in a model is given by the model theory, and an intended model is a model which has as its domain the actual objects of discourse, and which relates these objects in an appropriate manner. However, the same strategy applied to the modal case seems to require an intended modal model whose domain includes mere possibilia. Building on recent work by Christopher Menzel (Noûs, 1990), I give an account of model-theoretic semantics for modal languages which does not require mere possibilia or intensional entities of any kind. Menzel has offered a representational account of model-theoretic modal semantics that accords with actualist scruples, since it does not require possibilia. However, Menzel's view is in the company of other actualists who seek to eliminate possible worlds, but whose accounts tolerate other sorts of abstract, intensional entities, such as possible states of affairs. Menzel's account crucially depends on the existence of properties and relations in intension.
I offer an interpretation of a familiar, but poorly understood, portion of Tarski's work on truth – bringing to light a number of unnoticed aspects of that work. A serious misreading of this part of Tarski, to be found in Scott Soames's Understanding Truth, is treated in detail. Soames's reading conflicts with the textual evidence, and would make Tarski's position inconsistent in an unsubtle way. I show that Soames does not finally have a coherent interpretation of Tarski. This is unfortunate, since Soames ultimately arrogates to himself a key position that he has denied to Tarski and which is rightfully Tarski's own.
Half a century after Michael Polanyi conceptualised ‘the tacit component’ in personal knowing, management studies has reinvented ‘tacit knowledge’—albeit in ways that squander the advantages of Polanyi’s insights and ignore his faith in ‘spiritual reality’. While tacit knowing challenged the absurdities of sheer objectivity, expressed in a ‘perfect language’, it fused rational knowing, based on personal experience, with mystical speculation about an un-experienced ‘external reality’. Faith alone saved Polanyi’s model from solipsism. But Ernst von Glasersfeld’s radical constructivism provides scope to rethink personal tacit knowing with regard to ‘other people’ and the intersubjectively viable construction of ‘experiential reality’. By separating tacit knowing from Polanyi’s metaphysical realism and drawing on Benedict Anderson’s concept of ‘imagined communities’, it is possible to conceptualise ‘imagined institutions’ as the tacit dimension of power that shapes human interaction. Whereas Douglass North claimed institutions could be reduced to rules, imagined institutions are known in ways we cannot tell.
According to Timothy Williamson's epistemic view, vague predicates have precise extensions; we just don't know where their boundaries lie. It is a central challenge to his view to explain why we would be so ignorant, if precise borderlines were really there. He offers a novel argument to show that our insuperable ignorance “is just what independently justified epistemic principles would lead one to expect”. This paper carefully formulates and critically examines Williamson's argument. It is shown that the argument does not explain our ignorance, and is not really apt for doing so. Williamson's unjustified commitment to a controversial and crucial assumption is noted. It is also argued in three different ways that his argument is, in any case, self-defeating – the same principles that drive the argument can be applied to undermine one of its premises. Along the way, Williamson's unstated commitment to a number of other controversial doctrines comes to light.
According to Nancy Cartwright, a causal law holds just when a certain probabilistic condition obtains in all test situations which in turn satisfy a set of background conditions. These background conditions are shown to be inconsistent and, on a separate count, logically incoherent. I offer a corrective reformulation which also incorporates a strategy for problems like Hesslow's thrombosis case. I also show that Cartwright's recent argument for modifying the condition to appeal to singular causes fails. Proposed modifications of the theory's probabilistic condition to handle effects with extreme probabilities (0 or 1) are found unsatisfactory. I propose a unified solution which also handles extreme causes. Undefined conditional probabilities give rise to three good, but non-equivalent, ways of formulating the theory. Various formulations appear in the literature. I give arguments to eliminate all but one candidate. Finally, I argue for a crucial new condition clause, and show how to extend the results beyond a simple probabilistic framework.
Abduction is or subsumes a process of inference. It entertains possible hypotheses and it chooses hypotheses for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming, and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. Unlike previous work in the area, our aim is to promote the integration of reasoning and learning in such a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability of the systems, facilitating the interaction with the outside world. Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
The paper develops a view of interpretative cultural practice as a complex system of dynamically changing constituents which stand in definite relations to one another. These constituents are the Object of interpretation (O), the Result of interpretation, or the interpretation itself (I), the Process of interpretation (P), and the interpreting Subject (S). It is argued that if such a view is adopted, ‘singularism’ as a norm for cultural practices necessarily gives way to ‘multiplism’. Singularism and multiplism are terms used by Michael Krausz in Rightness and Reasons (1993). Krausz also talks of certain interpretative practices as imputational, in the sense that the object of interpretation changes, is ‘imputed upon’, during the course of the practice. This paper contends that all cultural practices are imputational, for each such practice leaves its effect on the object. Not only does practice affect the object, but it affects the subject too. The evolution of the subject, the self, through imputational interpretative cultural practices is explored as a major element in the making of a human individual.
This paper critiques a recent article in this journal in terms of its use of persuasive techniques. The central issue of both the original article by Miles, Munilla and Covin and this paper is whether there should be a change in intellectual property rights to address the needs of impoverished people who are HIV-positive or have full-blown AIDS, and of the countries that do not have the means to buy AIDS medication in the absence of subsidies. This paper argues that patents are state-sanctioned monopolies that worked effectively for nearly a century. However, new circumstances and a globally interdependent world represent a new environment calling for an adjustment in the conventional public policy premises underlying patents. Most of the meaning and complexity of this issue is lost to the persuasive techniques of the original article.
In Logical Consequence: A Defense of Tarski (Journal of Philosophical Logic, vol. 25, 1996, pp. 617–677), Greg Ray defends Tarski's account of logical consequence against the criticisms of John Etchemendy. While Ray's defense of Tarski is largely successful, his attempt to give a general proof that Tarskian consequence preserves truth fails. Analysis of this failure shows that de facto truth preservation is a very weak criterion of adequacy for a theory of logical consequence and should be replaced by a stronger absence-of-counterexamples criterion. It is argued that the latter criterion reflects the modal character of our intuitive concept of logical consequence, and it is shown that Tarskian consequence can be proved to satisfy this criterion for certain choices of logical constants. Finally, an apparent inconsistency in Ray's interpretation of Tarski's position on the modal status of the consequence relation is noted.
Book review: Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency, trans. Ray Brassier (London and New York: Continuum, 2008), 27.95 (hb); 19.95 (pb); and Graham Harman, Quentin Meillassoux: Philosophy in the Making (Edinburgh: Edinburgh University Press, 2011), viii + 247 pp., 110.00 (hb); 32.00 (pb). Reviewed by Clayton Crockett, University of Central Arkansas, Conway, AR, USA. International Journal for Philosophy of Religion, pp. 1–5, DOI 10.1007/s11153-012-9341-x.
Are We Spiritual Machines? as well as Ray Kurzweil for his response to my essay in that book and his willingness to take part in this discussion. My essay in that book was titled "Kurzweil's Impoverished Spirituality" and was essentially a stripped-down version of a piece I had done for...
The observed association between supernovae and gamma-ray bursts represents a cornerstone in our understanding of the nature of gamma-ray bursts. The collapsar model provides a theoretical framework for this connection. A key element is the launch of a bipolar jet (seen as a gamma-ray burst). The resulting hot cocoon disrupts the star, whereas the 56Ni produced gives rise to radioactive heating of the ejecta, seen as a supernova. In this discussion paper, I summarize the observational status of the supernova–gamma-ray burst connection in the context of the ‘engine’ picture of jet-driven supernovae and highlight SN 2012bz/GRB 120422A—with its luminous supernova but intermediate high-energy luminosity—as a possible transition object between low-luminosity and jet gamma-ray bursts. The jet channel for supernova explosions may provide new insights into supernova explosions in general.
The origin of gamma-ray bursts (GRBs) is one of the most interesting puzzles in recent astronomy. During the last decade a consensus has formed that long GRBs (LGRBs) arise from the collapse of massive stars, and that short GRBs (SGRBs) have a different origin, most likely neutron star mergers. A key ingredient of the collapsar model that explains how the collapse of massive stars produces a GRB is the emergence of a relativistic jet that penetrates the stellar envelope. The condition that the emerging jet penetrates the envelope imposes strong constraints on the system. Using these constraints we show the following. (i) Low-luminosity GRBs (llGRBs), a subpopulation of GRBs with very low luminosities (and other peculiar properties: single-peaked, smooth and soft), cannot be formed by collapsars. llGRBs must have a different origin (most likely a shock breakout). (ii) On the other hand, regular LGRBs must be formed by collapsars. (iii) While for BATSE the dividing line between collapsars and non-collapsars is indeed at approximately 2 s, the dividing line is different for other GRB detectors. In particular, most Swift bursts longer than 0.8 s are of a collapsar origin. This last result requires a revision of many conclusions concerning the origin of Swift SGRBs, which were based on the commonly used 2 s limit.
Complete samples are the basis of any population study. To this end, we selected a complete subsample of Swift long bright gamma-ray bursts (GRBs). The sample, made up of 58 bursts, was selected by considering bursts with favourable observing conditions for ground-based follow-up observations and with the 15–150 keV 1 s peak flux above a flux threshold of 2.6 photons cm^−2 s^−1. This sample has a redshift completeness level higher than 90 per cent. Using this complete sample, we investigate the properties of long GRBs and their evolution with cosmic time, focusing in particular on the GRB luminosity function, the prompt emission spectral-energy correlations and the nature of dark bursts.
Book review: Bob B. He, Two-Dimensional X-ray Diffraction. Reviewed by George B. Kauffman, Department of Chemistry, California State University, Fresno, CA, USA. Foundations of Chemistry, pp. 1–2, DOI 10.1007/s10698-011-9135-8.
We consider the implications of a model for long-duration gamma-ray bursts in which the progenitor is spun up in a close binary by tidal interactions with a massive black-hole companion. We investigate a sample of such binaries produced by a binary population synthesis, and show that the model predicts several common features in the accretion on to the newly formed black hole. In all cases, the accretion rate declines as approximately t^−5/3 until a break at a time of order 10^4 s. The accretion rate declines steeply thereafter. Subsequently, there is flaring activity, with the flare peaking between 10^4 and 10^5 s, the peak time being correlated with the flare energy. We show that these times are set by the semi-major axis of the binary, and hence the process of tidal spin-up; furthermore, they are consistent with flares seen in the X-ray light curves of some long gamma-ray bursts.
In our quest for gamma-ray burst (GRB) progenitors, it is relevant to consider the progenitor evolution of normal supernovae (SNe). This is largely dominated by mass loss. We discuss the mass-loss rate for very massive stars up to 300 M⊙. These objects are in close proximity to the Eddington Γ limit. We describe the new concept of the transitional mass-loss rate, enabling us to calibrate wind mass loss. This allows us to consider the occurrence of pair-instability SNe in the local Universe. We also discuss luminous blue variables and their link to luminous SNe. Finally, we address the polarization properties of Wolf–Rayet (WR) stars, measuring their wind asphericities. We argue to have found a group of rotating WR stars that fulfil the required criteria to make long-duration GRBs.
I reappraise in detail Hertz's cathode ray experiments. I show that, contrary to Buchwald's (1995) evaluation, the core experiment establishing the electrostatic properties of the rays was successfully replicated by Perrin (probably) and Thomson (certainly). Buchwald's discussion of 'current purification' is shown to be a red herring. My investigation of the origin of Buchwald's misinterpretation of this episode reveals that he was led astray by a focus on what Hertz 'could do' – his experimental resources. I argue that one should focus instead on what Hertz wanted to achieve – his experimental goals. Focusing on these goals, I find that his explicit and implicit requirements for a successful investigation of the rays' properties are met by Perrin and Thomson. Thus, even by Hertz's standards, they did indeed replicate his experiment.
A 10 kHz pulsed X-ray generator utilising a hot-cathode triode in conjunction with a new type of grid control device for controlling X-ray duration is described. The energy-storage condenser was charged up to 70 kV by a power supply, and the electric charges in the condenser were discharged to the X-ray tube repetitively by the grid control device. The maximum values of the grid voltage (negative value), the tube voltage, and the tube current were −1.5 kV, 70 kV, and 0.4 A, respectively. The duration of the flash X-ray pulse was primarily determined by the time constant of the grid control device and the cut-off voltage of thermoelectrons. The X-ray duration was controlled within a region of less than 1 ms; the X-ray intensity with a pulse width of 0.27 ms, a charged voltage of 70 kV, and a peak tube current of 0.4 A was 0.92 μC kg^−1 at 0.5 m per pulse. The maximum repetition rate was about 10 kHz, and the size of the focal spot was about 3.5×3.5 mm.