The purpose of this paper is to explore three alternative frameworks for understanding the nature of language and mentality, which accent syntactical, semantical, and pragmatical aspects of the phenomena with which they are concerned, respectively. Although the computational conception currently exerts considerable appeal, its defensibility appears to hinge upon an extremely implausible theory of the relation of form to content. Similarly, while the representational approach has much to recommend it, its range is essentially restricted to those units of language that can be understood in terms of undefined units. Thus, the only alternative among these three that can account for the meaning of primitive units of language is one emphasizing the basic role of skills, habits, and tendencies in relating signs and dispositions.
Luciano Floridi (2003) offers a theory of information as a strongly semantic notion, according to which information encapsulates truth, thereby making truth a necessary condition for a sentence to qualify as information. While Floridi provides an impressive development of this position, the aspects of his approach of greatest philosophical significance are its foundations rather than its formalization. He rejects the conception of information as meaningful data, which entails at least three theses – that information can be false; that tautologies are information; and that "It is true that ..." is non-redundant – each of which appears to be defensible. This inquiry offers various logical, epistemic, and ordinary-language grounds to demonstrate that an account of his kind is too narrow to be true and that its adoption would hopelessly obscure crucial differences between information, misinformation, and disinformation.
The social exchange theory of reasoning, which is championed by Leda Cosmides and John Tooby, falls under the general rubric of evolutionary psychology and asserts that human reasoning is governed by content-dependent, domain-specific, evolutionarily-derived algorithms. According to Cosmides and Tooby, the presumptive existence of what they call cheater-detection algorithms disconfirms the claim that we reason via general-purpose mechanisms or via inductively acquired principles. We contend that the Cosmides/Tooby arguments in favor of domain-specific algorithms or evolutionarily-derived mechanisms fail and that the notion of a social exchange rule, which is central to their theory, is not correctly characterized. As a consequence, whether or not their conclusion is true cannot be established on the basis of the arguments they have presented.
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.
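The algorithm/program contrast invoked here can be made concrete with a minimal sketch (an illustration supplied for this summary, not an example from the paper): as a logical structure, a simple accumulation algorithm provably yields n·x over the real numbers, yet the program that causally realizes it on physical hardware computes with finite-precision floating-point values and can diverge from that verified result.

```python
# Minimal illustration (not from the paper): a deductively "verified" algorithm
# versus its causal realization on physical hardware.

def repeated_add(x, n):
    """Add x to an accumulator n times; over the reals this equals n * x."""
    total = 0.0
    for _ in range(n):
        total += x
    return total

# As an abstract algorithm, repeated_add(0.1, 10) must equal 1.0.
# As a program executed with IEEE-754 floating-point arithmetic, it does not:
result = repeated_add(0.1, 10)
print(result)         # 0.9999999999999999
print(result == 1.0)  # False: verifying the logical structure does not
                      # guarantee the performance of its causal model
```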
The distinction between misinformation and disinformation becomes especially important in political, editorial, and advertising contexts, where sources may make deliberate efforts to mislead, deceive, or confuse an audience in order to promote their personal, religious, or ideological objectives. The difference consists in having an agenda. It thus bears comparison with lying, because lies are assertions that are false, that are known to be false, and that are asserted with the intention to mislead, deceive, or confuse. One context in which disinformation abounds is the study of the death of JFK, which I know from more than a decade of personal research experience. Here I reflect on that experience and advance a preliminary theory of disinformation that is intended to stimulate thinking on this increasingly important subject. Five kinds of disinformation are distinguished and exemplified by real-life cases I have encountered. It follows that the story you are about to read is true.
Cognitive science has been dominated by the computational conception that cognition is computation across representations. To the extent to which cognition as computation across representations is supposed to be a purposive, meaningful, algorithmic, problem-solving activity, however, computers appear to be incapable of cognition. They are devices that can facilitate computations on the basis of semantic grounding relations as special kinds of signs. Even their algorithmic, problem-solving character arises from their interpretation by human users. Strictly speaking, computers as such — apart from human users — are not only incapable of cognition, but even incapable of computation, properly construed. If we want to understand the nature of thought, then we have to study thinking, not computing, because they are not the same thing.
Taking Brian Cantwell Smith’s study, “Limits of Correctness in Computers,” as its point of departure, this article explores the role of models in computer science. Smith identifies two kinds of models that play an important role, where specifications are models of problems and programs are models of possible solutions. Both presuppose the existence of conceptualizations as ways of conceiving the world “in certain delimited ways.” But high-level programming languages also function as models of virtual (or abstract) machines, while low-level programming languages function as models of causal (or physical) machines. The resulting account suggests that sets of models embedded within models are indispensable for computer programming.
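A small sketch may help fix the first distinction as it is summarized here (the names and the sorting example are supplied for illustration, not drawn from Smith or from the article): a specification models the problem by stating what any correct result must be like, while a program models one possible solution by stating how to produce it.

```python
# Illustrative only: a specification as a model of the problem,
# a program as a model of one possible solution.
from collections import Counter

def meets_spec(xs, ys):
    """Specification: ys is a permutation of xs arranged in non-decreasing order."""
    return Counter(xs) == Counter(ys) and all(a <= b for a, b in zip(ys, ys[1:]))

def insertion_sort(xs):
    """Program: one candidate solution to the problem the specification models."""
    ys = []
    for x in xs:
        i = 0
        while i < len(ys) and ys[i] <= x:
            i += 1
        ys.insert(i, x)
    return ys

data = [3, 1, 2]
print(meets_spec(data, insertion_sort(data)))  # True for this input
```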
The thesis of this paper is that extensional language alone provides an essentially inadequate foundation for the logical formalization of any lawlike statement. The arguments presented are intended to demonstrate that lawlike sentences are logically general dispositional statements requiring an essentially intensional reduction sentence formulation. By introducing a non-extensional logical operator, the 'fork', the difference between universal and statistical laws emerges in a distinction between dispositional predicates of universal strength as opposed to those of merely statistical strength. While the logical form of universal and statistical laws appears to be fundamentally dissimilar on the standard account, from this point of view their syntactical structure is basically the same.
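The closing claim, that universal and statistical laws share a common syntactical structure, can be rendered schematically as follows, where the connectives are merely stand-ins for the intensional 'fork' at universal and at statistical strength; the notation is supplied here for illustration rather than taken from the paper.

```latex
% Schematic rendering only; the symbols stand in for the 'fork' operator and
% are not the paper's own notation.
\[
  \text{universal law:}\quad (x)(t)\,[\,Rxt \;\Rightarrow_{u}\; Axt\,]
\]
\[
  \text{statistical law:}\quad (x)(t)\,[\,Rxt \;\Rightarrow_{n}\; Axt\,],
  \qquad 0 < n < 1
\]
% Here R is the reference (antecedent) predicate and A the attribute predicate;
% the two forms differ only in the strength index on the dispositional connective.
```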
An approach to inference to the best explanation integrating a Popperian conception of natural laws together with a modified Hempelian account of explanation, on the one hand, and Hacking's law of likelihood (in its nomic guise), on the other, provides a robust abductivist model of science that appears to overcome the obstacles that confront its inductivist, deductivist, and hypothetico-deductivist alternatives. This philosophy of science clarifies and illuminates some fundamental aspects of ontology and epistemology, especially concerning the relations between frequencies and propensities. Among the most important elements of this conception is the central role of degrees of nomic expectability in explanation, prediction, and inference, for which this investigation provides a theoretical defense.
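For readers unfamiliar with it, Hacking's law of likelihood, which this abstract invokes, can be stated as below; the second formulation is only a gloss on its "nomic guise" as described here, with NE used as a stand-in symbol for degrees of nomic expectability.

```latex
% Hacking's law of likelihood (general form).
\[
  e \ \text{favors}\ h_{1}\ \text{over}\ h_{2}
  \quad\text{iff}\quad
  P(e \mid h_{1}) > P(e \mid h_{2}).
\]
% Nomic gloss (illustrative notation only): likelihoods replaced by the degrees
% of nomic expectability that the lawful hypotheses confer upon the evidence.
\[
  e \ \text{favors}\ h_{1}\ \text{over}\ h_{2}
  \quad\text{iff}\quad
  NE(e \mid h_{1}) > NE(e \mid h_{2}).
\]
```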
Carl G. Hempel exerted greater influence upon philosophers of science than any other figure during the 20th century. In this far-reaching collection, distinguished philosophers contribute valuable studies that illuminate and clarify the central problems to which Hempel was devoted. The essays enhance our understanding of the development of logical empiricism as the major intellectual influence for scientifically-oriented philosophers and philosophically-minded scientists of the 20th century.
Cosmides, Wason, and Johnson-Laird, among others, have presented evidence suggesting that reasoning abilities tend to be domain specific, insofar as humans do not appear to acquire capacities for logical reasoning that are applicable across different contexts. Unfortunately, the significance of these findings depends upon the specific variety of logical reasoning under consideration. Indeed, there seem to be at least three grounds for doubting such conclusions, since: (1) tests of reasoning involving the use of material conditionals may not be appropriate for representing ordinary thinking, especially when it concerns causal processes involving the use of causal conditionals instead; (2) tests of domain specificity may fail to acknowledge the crucial role fulfilled by rules of inference, such as modus ponens and modus tollens, which appear to be completely general across different contexts; and, (3) tests that focus exclusively upon deductive reasoning may misinterpret findings involving the use of inductive reasoning, which is of primary importance for human evolution.
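Point (1) can be made concrete with the standard Wason selection task, sketched below on the usual four cards (the sketch is supplied for illustration and is not taken from the studies cited): under the material-conditional reading of "if a card shows a vowel, it shows an even number on the other side", only cards that could make the antecedent true and the consequent false need to be turned over.

```python
# Illustrative sketch of the Wason selection task under the material-conditional
# reading of the rule "if vowel on one side, then even number on the other".

VOWELS = set("AEIOU")

def could_falsify(visible_face):
    """True if the card's hidden side could refute the rule p -> q."""
    if visible_face.isalpha():
        # A letter is showing: the card can falsify the rule only if it is a
        # vowel (p true) and the hidden number turns out to be odd (q false).
        return visible_face.upper() in VOWELS
    # A number is showing: the card can falsify the rule only if the number is
    # odd (q false) and the hidden letter turns out to be a vowel (p true).
    return int(visible_face) % 2 != 0

cards = ["E", "K", "4", "7"]
print([c for c in cards if could_falsify(c)])  # ['E', '7']
```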
The idea that human thought requires the execution of mental algorithms provides a foundation for research programs in cognitive science, which are largely based upon the computational conception of language and mentality. Consideration is given to recent work by Penrose, Searle, and Cleland, who supply various grounds for disputing computationalism. These grounds in turn qualify as reasons for preferring a non-computational, semiotic approach, which can account for them as predictable manifestations of a more adequate conception. Thinking does not ordinarily require the execution of mental algorithms, which appears to be at best no more than one rather special kind of thinking.
The purpose of this paper is to provide a systematic defense of the single-case propensity account of probabilistic explanation from the criticisms advanced by Hanna and by Humphreys and to offer a critical appraisal of the aleatory conception advanced by Humphreys and of the deductive-nomological-probabilistic approach Railton has proposed. The principal conclusion supported by this analysis is that the Requirements of Maximal Specificity and of Strict Maximal Specificity afford the foundation for completely objective explanations of probabilistic explananda, so long as they are employed on the basis of propensity criteria of explanatory relevance.
This paper pursues the question, To what extent does the propensity approach to probability contribute to plausible solutions to various anomalies which occur in quantum mechanics? The position I shall defend is that of the three interpretations — the frequency, the subjective, and the propensity — only the third accommodates the possibility, in principle, of providing a realistic interpretation of ontic indeterminism. If these considerations are correct, then they lend support to Popper's contention that the propensity construction tends to remove (at least some of) the mystery from quantum phenomena.
The purpose of this essay is to investigate the properties of singular causal systems and their population manifestations, with special concern for the thesis of methodological individualism, which claims that there are no properties of social groups that cannot be adequately explained exclusively by reference to properties of individual members of those groups, i.e., at the level of individuals. Individuals, however, may be viewed as singular causal systems, i.e., as instantiations of (arrangements of) dispositional properties. From this perspective, methodological individualism appears to be an ambiguous thesis: some properties of collections of (independent) systems of the same kind are reducible, but other properties of collections of (dependent) systems of the same kind are not. In cases of the first kind, therefore, methodological individualism is true, but trivial; while in cases of the second kind, it is significant, but false. Hence, if the arguments that follow are correct, at least some of the properties of social groups should qualify as emergent.
A debate over the theoretical capabilities of formal methods in computer science has raged for more than two years now. The function of this paper is to summarize the key elements of this debate and to respond to important criticisms others have advanced by placing these issues within a broader context of philosophical considerations about the nature of hardware and of software and about the kinds of knowledge that we have the capacity to acquire concerning their performance.
The distinguished theologian, David Ray Griffin, has advanced a set of thirteen theses intended to characterize (what he calls) "Neo-Darwinism", which he contrasts with "Intelligent Design". Griffin maintains that Neo-Darwinism is "atheistic" in forgoing a creator but suggests that, by adopting a more modest scientific naturalism and embracing a more naturalistic theology, it is possible to find "a third way" that reconciles religion and science. The considerations adduced here suggest that Griffin has promised more than he can deliver. On his account, God is in the laws of nature; therefore, any influence He exerts is natural rather than supernatural. But if the differences God makes are not empirically detectable, then Griffin's account is just as objectionable as a theory of supernatural intervention. And Griffin has not shown that evolution, as distinct from his idiosyncratic sense of Neo-Darwinism, is incompatible with theism.
Perhaps no technological innovation has so dominated the second half of the twentieth century as has the introduction of the programmable computer. It is quite difficult if not impossible to imagine how contemporary affairs—in business and science, communications and transportation, governmental and military activities, for example—could be conducted without the use of computing machines, whose principal contribution has been to relieve us of the necessity for certain kinds of mental exertion. The computer revolution has reduced our mental labors by means of these machines, just as the Industrial Revolution reduced our physical labor by means of other machines.