A common presupposition in the concepts literature is that concepts constitute a singular natural kind. If, on the contrary, concepts split into more than one kind, this literature needs to be recast in terms of other kinds of mental representation. We offer two new arguments that concepts, in fact, divide into different kinds: (a) concepts split because different kinds of mental representation, processed independently, must be posited to explain different sets of relevant phenomena; (b) concepts split because different kinds of mental representation, processed independently, must be posited to explain responses to different kinds of category. Whether these arguments are sound remains an open empirical question, to be resolved by future empirical and theoretical work. Received April 2005; revised May 2006.
Machery argues that “philosophical theories of concepts” and “psychological theories of concepts” are about different things (31). To begin with, the expression “philosophical theory of concept” is somewhat obscure. Machery seems to use it as a synonym for “theory of concepts developed by a philosopher” (33, 34). Now, it may be true that some theories of concepts proposed by philosophers are about something different than the theories proposed by psychologists. But other theories of concepts proposed by professional philosophers – including Machery! – are explicitly after the same thing (e.g., Fodor 2008, which Machery cites). Roughly speaking, they are after the building blocks of thought. Slightly more precisely, concepts – the target of theories proposed by both psychologists and many philosophers – are representations posited to explain certain cognitive phenomena, including recognition, naming, inference, and language understanding (cf. Piccinini and Scott 2006, 396). To be sure, philosophers tend to emphasize aspects of concepts, such as how concepts get to represent categories, that…
We sketch a framework for building a unified science of cognition. This unification is achieved by showing how functional analyses of cognitive capacities can be integrated with the multilevel mechanistic explanations of neural systems. The core idea is that functional analyses are sketches of mechanisms, in which some structural aspects of a mechanistic explanation are omitted. Once the missing aspects are filled in, a functional analysis turns into a full-blown mechanistic explanation. By this process, functional analyses are seamlessly integrated with multilevel mechanistic explanations.
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism and connectionism on the other. We defend the relevance to cognitive science of both computation, in a generic sense that we fully articulate for the first time, and information processing, in three important senses of the term. Our account advances some foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects.
As our data will show, negative existential sentences containing so-called empty names evoke the same strong semantic intuitions in ordinary speakers and philosophers alike: (1) Santa Claus does not exist; Superman does not exist; Clark Kent does not exist. Uttering the sentences in (1) seems to say something truth-evaluable, to say something true, and to say something different for each sentence. A semantic theory ought to explain these semantic intuitions. The intuitions elicited by (1) are in apparent conflict with the Millian view of proper names. According to Millianism, the meaning (or ‘semantic value’) of a proper name is just its referent. But empty names, such as ‘Santa Claus’ and ‘Superman’, appear to lack a…
I appeal to Merker’s theory to motivate a hypothesis about the ontology of consciousness: creature consciousness is (at least partially) constitutive of phenomenal consciousness. Rather than elaborating theories of phenomenal consciousness couched solely in terms of state consciousness, as philosophers are fond of doing, a correct approach to phenomenal consciousness should begin with an account of creature consciousness.
Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated. This essay surveys some recent literature on computationalism. It concludes that computationalism is a family of theories about the mechanisms of cognition. The main relevant evidence for testing it comes from neuroscience, though psychology and AI are relevant too. Computationalism comes in many versions, which continue to guide competing research programs in philosophy of mind as well as psychology and neuroscience. Although our understanding of computationalism has deepened in recent years, much work in this area remains to be done.
Defending or attacking either functionalism or computationalism requires clarity on what they amount to and what evidence counts for or against them. My goal here is not to evaluate their plausibility. My goal is to formulate them and their relationship clearly enough that we can determine which type of evidence is relevant to them. I aim to dispel some sources of confusion that surround functionalism and computationalism, recruit recent philosophical work on mechanisms and computation to shed light on them, and clarify how functionalism and computationalism may or may not legitimately come together.
According to the zombie conceivability argument, phenomenal zombies are conceivable, and hence possible, and hence physicalism is false. Critics of the conceivability argument have responded by denying either that zombies are conceivable or that they are possible. Much of the controversy hinges on how to establish and understand what is conceivable, what is possible, and the link between the two—matters that are at least as obscure and controversial as whether consciousness is physical. Because of this, the debate over physicalism is unlikely to be resolved by thinking about zombies—or at least, zombies as discussed by philosophers to date.
In this paper, I explore an alternative strategy against the zombie conceivability argument. I accept the possibility of zombies and ask whether that possibility is accessible (in the sense of ‘accessible’ used in possible world semantics) to our world. It turns out that the question of whether zombie worlds are accessible to our world is equivalent to the question of whether physicalism is true. By assuming that zombie worlds are accessible to our world, supporters of the zombie conceivability argument beg the question against physicalists. I will then consider what happens if a supporter of the zombie conceivability argument should insist that zombie worlds are accessible to our world. I will argue that the same ingredients used in the zombie conceivability argument—whatever they might be—can be used to construct an argument to the opposite conclusion. If that is correct, we reach a stalemate between physicalism and property dualism: while the possibility of some zombies entails property dualism, the possibility of other creatures entails physicalism. Since these two possibilities are inconsistent, one of them is not genuine. To resolve this stalemate, we need more than thought experiments.
First-person data have been both condemned and hailed because of their alleged privacy. Critics argue that science must be based on public evidence: since first-person data are private, they should be banned from science. Apologists reply that first-person data are necessary for understanding the mind: since first-person data are private, scientists must be allowed to use private evidence. I argue that both views rest on a false premise. In psychology and neuroscience, the subjects issuing first-person reports and other sources of first-person data play the epistemic role of a (self-) measuring instrument. Data from measuring instruments are public and can be validated by public methods. Therefore, first-person data are as public as other scientific data: their use in science is legitimate, in accordance with standard scientific methodology.
According to the computational theory of mind (CTM), mental capacities are explained by inner computations, which in biological organisms are realized in the brain. Computational explanation is so popular and entrenched that it’s common for scientists and philosophers to assume CTM without argument.
Since the cognitive revolution, it’s become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and connecting them together. Much confusion can be avoided by doing so. Keywords: computation, information processing, computationalism, computational theory of mind, cognitivism.
According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.
The received view is that computational states are individuated at least in part by their semantic properties. I offer an alternative, according to which computational states are individuated by their functional properties. Functional properties are specified by a mechanistic explanation without appealing to any semantic properties. The primary purpose of this paper is to formulate the alternative view of computational individuation, point out that it supports a robust notion of computational explanation, and defend it on the grounds of how computational states are individuated within computability theory and computer science. A secondary purpose is to show that existing arguments for the semantic view are defective.
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation.
Heterophenomenology is a third-person methodology proposed by Daniel Dennett for using first-person reports as scientific evidence. I argue that heterophenomenology can be improved by making six changes: (i) setting aside consciousness, (ii) including other sources of first-person data besides first-person reports, (iii) abandoning agnosticism as to the truth value of the reports in favor of the most plausible assumptions we can make about what can be learned from the data, (iv) interpreting first-person reports (and other first-person behaviors) directly in terms of target mental states rather than in terms of beliefs about them, (v) dropping any residual commitment to incorrigibility of first-person reports, and (vi) recognizing that third-person methodology does have positive effects on scientific practices. When these changes are made, heterophenomenology turns into the self-measurement methodology of first-person data that I have defended in previous papers.
Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
Introspective reports are used as sources of information about other minds, in both everyday life and science. Many scientists and philosophers consider this practice unjustified, while others have made the untestable assumption that introspection is a truthful method of private observation. I argue that neither skepticism nor faith concerning introspective reports are warranted. As an alternative, I consider our everyday, commonsensical reliance on each other’s introspective reports. When we hear people talk about their minds, we neither refuse to learn from nor blindly accept what they say. Sometimes we accept what we are told, other times we reject it, and still other times we take the report, revise it in light of what we believe, then accept the modified version. Whatever we do, we have (implicit) reasons for it. In developing a sound methodology for the scientific use of introspective reports, we can take our commonsense treatment of introspective reports and make it more explicit and rigorous. We can discover what to infer from introspective reports in a way similar to how we do it every day, but with extra knowledge, methodological care, and precision. Sorting out the use of introspective reports as sources of data is going to be a painstaking, piecemeal task, but it promises to enhance our science of the mind and brain.
I offer an explication of the notion of computer, grounded in the practices of computability theorists and computer scientists. I begin by explaining what distinguishes computers from calculators. Then, I offer a systematic taxonomy of kinds of computer, including hard-wired versus programmable, general-purpose versus special-purpose, analog versus digital, and serial versus parallel, giving explicit criteria for each kind. My account is mechanistic: which class a system belongs in, and which functions are computable by which system, depends on the system's mechanistic properties. Finally, I briefly illustrate how my account sheds light on some issues in the history and philosophy of computing as well as the philosophy of mind.
This article defends a modest version of the Physical Church-Turing thesis (CT). Following an established recent trend, I distinguish between what I call Mathematical CT—the thesis supported by the original arguments for CT—and Physical CT. I then distinguish between bold formulations of Physical CT, according to which any physical process—anything doable by a physical system—is computable by a Turing machine, and modest formulations, according to which any function that is computable by a physical system is computable by a Turing machine. I argue that Bold Physical CT is not relevant to the epistemological concerns that motivate CT and hence not suitable as a physical analog of Mathematical CT. The correct physical analog of Mathematical CT is Modest Physical CT. I propose to explicate the notion of physical computability in terms of a usability constraint, according to which for a process to count as relevant to Physical CT, it must be usable by a finite observer to obtain the desired values of a function. Finally, I suggest that proposed counterexamples to Physical CT are still far from falsifying it because they have not been shown to satisfy the usability constraint.
Roughly speaking, computationalism says that cognition is computation, or that cognitive phenomena are explained by the agent’s computations. The cognitive processes and behavior of agents are the explanandum. The computations performed by the agents’ cognitive systems are the proposed explanans. Since the cognitive systems of biological organisms are their nervous systems (plus or minus a bit), we may say that according to computationalism, the cognitive processes and behavior of organisms are explained by neural computations. Some people might prefer to say that cognitive systems are “realized” by nervous systems, and thus that—according to computationalism—cognitive computations are “realized” by neural processes. In this paper, nothing hinges on the nature of the relation between cognitive systems and nervous systems, or between computations and neural processes. For present purposes, if a neural process realizes a computation, then that neural process is a computation. Thus, I will couch much of my discussion in terms of nervous systems and neural computation. Before proceeding, we should dispense with a possible red herring. Contrary to a common assumption, computationalism does not stand in opposition to connectionism. Connectionism, in the most general and common sense of the term, is the claim that cognitive phenomena are explained (at some level and at least in part) by the processes of neural networks. This is a truism, supported by most neuroscientific evidence. Everybody ought to be a connectionist in this general sense. The relevant question is, are neural processes computations? More precisely, are the neural processes to be found in the nervous systems of organisms computations? Computationalists say “yes”, anti-computationalists say “no”.
This paper investigates whether any of the arguments on offer against computationalism have a chance at knocking it off. Ever since Warren McCulloch and Walter Pitts (1943) first proposed it, computationalism has been subjected to a wide range of objections…
This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
In the 1950s, Alan Turing proposed his influential test for machine intelligence, which involved a teletyped dialogue between a human player, a machine, and an interrogator. Two readings of Turing’s rules for the test have been given. According to the standard reading of Turing’s words, the goal of the interrogator was to discover which was the human being and which was the machine, while the goal of the machine was to be indistinguishable from a human being. According to the literal reading, the goal of the machine was to simulate a man imitating a woman, while the interrogator – unaware of the real purpose of the test – was attempting to determine which of the two contestants was the woman and which was the man. The present work offers a study of Turing’s rules for the test in the context of his advocated purpose and his other texts. The conclusion is that there are several independent and mutually reinforcing lines of evidence that support the standard reading, while fitting the literal reading in Turing’s work faces severe interpretative difficulties. So, the controversy over Turing’s rules should be settled in favor of the standard reading.
This paper offers an account of what it is for a physical system to be a computing mechanism—a system that performs computations. A computing mechanism is a mechanism whose function is to generate output strings from input strings and (possibly) internal states, in accordance with a general rule that applies to all relevant strings and depends on the input strings and (possibly) internal states for its application. This account is motivated by reasons endogenous to the philosophy of computing, namely, doing justice to the practices of computer scientists and computability theorists. It is also an application of recent literature on mechanisms, because it assimilates computational explanation to mechanistic explanation. The account can be used to individuate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power.
Introspection used to be excluded from science because it isn’t public--for any question about mental states, only the person whose states are in question can answer by introspecting. However, we often use introspective reports to gauge each other’s minds, and contemporary psychologists generate data from them. I argue that some uses of introspection are as public as any scientific method.
Despite its significance in neuroscience and computation, McCulloch and Pitts's celebrated 1943 paper has received little historical and philosophical attention. In 1943 there already existed a lively community of biophysicists doing mathematical work on neural networks. What was novel in McCulloch and Pitts's paper was their use of logic and computation to understand neural, and thus mental, activity. McCulloch and Pitts's contributions included (i) a formalism whose refinement and generalization led to the notion of finite automata (an important formalism in computability theory), (ii) a technique that inspired the notion of logic design (a fundamental part of modern computer design), (iii) the first use of computation to address the mind–body problem, and (iv) the first modern computational theory of mind and brain.
According to the Veridicality Thesis, information requires truth. On this view, smoke carries information about there being a fire only if there is a fire, the proposition that the earth has two moons carries information about the earth having two moons only if the earth has two moons, and so on. We reject this Veridicality Thesis. We argue that the main notions of information used in cognitive science and computer science allow A to have information about the obtaining of p even when p is false.
Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms. The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account of computing mechanism, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains.
By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.
The Church–Turing Thesis (CTT) is often employed in arguments for computationalism. I scrutinize the most prominent of such arguments in light of recent work on CTT and argue that they are unsound. Although CTT does nothing to support computationalism, it is not irrelevant to it. By eliminating misunderstandings about the relationship between CTT and computationalism, we deepen our appreciation of computationalism as an empirical hypothesis.
I argue that neural activity, strictly speaking, is not computation. This is because computation, strictly speaking, is the processing of strings of symbols, and neuroscience shows that there are no neural strings of symbols. This has two consequences. On the one hand, the following widely held consequences of computationalism must either be abandoned or supported on grounds independent of computationalism: (i) that in principle we can capture what is functionally relevant to neural processes in terms of some formalism taken from computability theory (such as Turing Machines), (ii) that it is possible to design computer programs that are functionally equivalent to neural processes in the same sense in which it is possible to design computer programs that are functionally equivalent to each other, (iii) that the study of neural (or mental) computation is independent of the study of neural implementation, (iv) that the Church-Turing thesis applies to neural activity in the sense in which it applies to digital computers. On the other hand, we need to gradually reinterpret or replace computational theories in psychology in terms of theoretical constructs that can be realized by known neural processes, such as the spike trains of neuronal ensembles.
We argue that Machery provides no convincing evidence that prototypes and exemplars are typically used in distinct cognitive processes. This partially undermines the fourth tenet of the Heterogeneity Hypothesis and thus casts doubt on Machery’s way of splitting concepts into different kinds. Although Machery may be right that concepts split into different kinds, such kinds may be different from those countenanced by the Heterogeneity Hypothesis.
Hurlburt and Schwitzgebel’s groundbreaking book, Describing Inner Experience: Proponent Meets Skeptic, examines a research method called Descriptive Experience Sampling (DES). DES, which was developed by Hurlburt and collaborators, works roughly as follows. An investigator gives a subject a random beeper. During the day, as the subject hears a beep, she writes a description of her conscious experience just before the beep. The next day, the investigator interviews the subject, asks for more details, corrects any apparent mistakes made by the subject, and draws conclusions about the subject’s mind. Throughout the book, Schwitzgebel challenges some of Hurlburt’s specific conclusions. Yet both agree – and so do I – that DES is a worthy method.
The following three theses are inconsistent: (1) (Paradigmatic) connectionist systems perform computations. (2) Performing computations requires executing programs. (3) Connectionist systems do not execute programs. Many authors embrace (2). This leads them to a dilemma: either connectionist systems execute programs or they don't compute. Accordingly, some authors attempt to deny (1), while others attempt to deny (3). But as I will argue, there are compelling reasons to accept both (1) and (3). So, we should replace (2) with a more satisfactory account of computation. Once we do, we can see more clearly what is peculiar to connectionist computation.
This paper offers an account of what it is for a physical system to be a computing mechanism—a mechanism that performs computations. A computing mechanism is any mechanism whose functional analysis ascribes it the function of generating output strings from input strings in accordance with a general rule that applies to all strings. This account is motivated by reasons that are endogenous to the philosophy of computing, but it may also be seen as an application of recent literature on mechanisms. The account can be used to individuate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power. This makes it ideal for grounding the comparison and assessment of computational theories of mind and brain.
Epistemic divergence occurs when different investigators give different answers to the same question using evidence-collecting methods that are not public. Without following the principle that scientific methods must be public, scientific communities risk epistemic divergence. I explicate the notion of public method and argue that, to avoid the risk of epistemic divergence, scientific communities should (and do) apply only methods that are public.
We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism—neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.
Newell was a founder of artificial intelligence (AI) and a pioneer in the use of computer simulations in psychology. In collaboration with J. Cliff Shaw and Herbert A. Simon, Newell developed the first list-processing programming language as well as the earliest computer programs for simulating human problem solving. Over a long and prolific career, he contributed to many techniques, such as protocol analysis and heuristic search, that are now part of psychology and computer science. Colleagues remembered Newell for his deep commitment to science, his care for details, and his inexhaustible energy.
“One should always cherish some ambition to do something in the world. They alone rise who strive” are the great words of Dr. Ambedkar. There are two fundamental types of human nature: creative and possessive. Creative humans use the human intellect for creative endeavors that enrich human thought, knowledge, and wealth, thereby contributing to the development of the human heritage for posterity. Possessive people, on the other hand, do not believe in the use of the human intellect for creative purposes. Gautam Buddha, Jesus Christ, Guru Nanak, Kabeer, Ravidas, Tukarama, Krantiba Jotirao Phoolay, Periyar, and Dr. Babasaheb Ambedkar all belong to the great class of creative humans called, in the Indian context, Humanists. Here we study Ambedkar’s views on humanism and Buddhism.
The present education does not yield the required results, mainly because it is divorced from real social content and social goals. As citizens of the republic, we are constitutionally committed to democracy, social justice, equality of opportunity, secularism, and above all to a welfare state. Educational policy and educational programmes should not merely equip an individual to adjust to society, its customs, and its conventions; they should enable him to bring about desirable changes in society. It is the dream of Dr. B. R. Ambedkar that every educational institution, from secondary school to university college, should be developed into an agency of change.
∗A special thanks to those who have assisted my archival research, including Aldo Antonelli, John Burgess, Michael Della Rocca, Herbert Enderton, Bernard Linsky, Heidi Lockwood, Ruth Barcan Marcus, Julien Murzi and Bas van Fraassen. An extra special thanks to Julien Murzi, who as my research assistant in the Fall of 2005 helped me to identify and think more clearly about the famous anonymous referee reports, which are central to the present paper. For discussion and/or assistance I am also grateful to many others, including Scott Berman, Berit Brogaard, Judy Crane, Susan Brower-Toland, David Chalmers, Solomon Feferman, Nick Griffin, Michael Hand, Monte Johnson, Jon Kvanvig, Matthias Lutz-Bachmann, Robert Meyer, Andreas Niederberger, Gualtiero Piccinini, Graham Priest, Krister Segerberg, Wilfried Sieg, Roy Sorensen, Kent Staley, Jim Stone, Neil Tennant, Achille Varzi, Nick Zavediuk, anonymous readers for OUP, and audience members at the Pacific APA in Portland (March 24, 2006), the Goethe University of Frankfurt (May 15, 2006), the Institute for Logic, Language and Computation at the University of Amsterdam (May 23, 2006), and the Namicona Epistemology Workshop at the University of Copenhagen (August 22, 2006). Thanks also to my department at Saint Louis University for granting time and resources to research and write the paper.
Dr. Evil learns that a duplicate of Dr. Evil has been created. Upon learning this, how seriously should he take the hypothesis that he himself is that duplicate? I answer: very seriously. I defend a principle of indifference for self-locating belief which entails that after Dr. Evil learns that a duplicate has been created, he ought to have exactly the same degree of belief that he is Dr. Evil as that he is the duplicate. More generally, the principle shows that there is a sharp distinction between ordinary skeptical hypotheses, and self-locating skeptical hypotheses.
Nicholas Rescher claims that rational decision theory “may leave us in the lurch”, because there are two apparently acceptable ways of applying “the standard machinery of expected-value analysis” to his Dr. Psycho paradox which recommend contradictory actions. He detects a similar contradiction in Newcomb’s problem. We consider his claims from the point of view of both Bayesian decision theory and causal decision theory. In Dr. Psycho and in Newcomb’s Problem, Rescher has used premisses about probabilities which he assumes to be independent. From the former point of view, we show that the probability premisses are not independent but inconsistent, and their inconsistency is provable within probability theory alone. From the latter point of view, we show that their consistency can be saved, but then the contradictory recommendations evaporate. Consequently, whether one subscribes to evidential or causal decision theory, rational decision theory is not in any way vitiated by Rescher’s arguments.
In this article, I explore the relationship between the philosophy of Theodor Adorno and the Bilderverbot, or biblical Second Commandment against images. My starting point is J. F. Lyotard's construction of the melancholic sublime in his essay `What is the Postmodern?', which I argue he uses to critique Adorno's aesthetics, and, more generally, his position as a `modern' thinker. To prove that Lyotard had Adorno in mind when he constructed the category of the melancholic sublime, I return to an earlier piece by Lyotard — `Adorno as the Devil' — which is a reading of Thomas Mann's Dr Faustus, in which Adorno is said to be one of the faces of the Devil. My argument is that Lyotard's understanding of Adorno is flawed because he does not recognize the distinctly Jewish, albeit secularized, character of his thought. I set out to challenge Lyotard by demonstrating the central importance that the Bilderverbot plays in Adorno's work, which should not be understood as melancholic because the Jewish Messianism associated with the Bilderverbot is profoundly future-oriented. In short, I argue that Lyotard's depiction of Adorno is flawed because he reads him as a Christian, while he should be approaching him as a secularized Jew. Key Words: Theodor Adorno • aesthetic theory • Dr Faustus • the image prohibition • Jewish thought • Jean-François Lyotard • Thomas Mann • Messianism • representation • the sublime.
In October 1775, David Hume wrote to his printer William Strahan, requesting that an ‘Advertisement’ should be attached to remaining copies of the second volume of his Essays and Treatises on Several Subjects. This volume contained his two Enquiries, the Dissertation on the Passions, and The Natural History of Religion, and the Advertisement states that these works should ‘alone be regarded as containing his philosophical sentiments and principles’ (E 2). In the covering letter, Hume comments that this ‘is a compleat Answer to Dr Reid and to that bigotted silly Fellow, Beattie.’ (HL ii. 301). My aim here is to try to throw light on what Hume might have meant by this comment, and to assess to what extent it might have been justified.
In her 2006 book “My Stroke of Insight”, Dr. Jill Bolte Taylor relates her experience of suffering a left-hemispheric stroke, caused by a congenital arteriovenous malformation, which led to a loss of inner speech. Her phenomenological account strongly suggests that this impairment produced a global self-awareness deficit as well as more specific dysfunctions related to corporeal awareness, sense of individuality, retrieval of autobiographical memories, and self-conscious emotions. These are examined in detail and corroborated by numerous excerpts from Taylor’s book.
The nationally famous advocate of physician-assisted suicide did not die by his own hand. Dr. Jack Kevorkian died the old-fashioned way in America: in a hospital, with multiple disorders undercutting his life. Kevorkian took up interest in assisted suicide early in his medical career, and he wanted prisoners on death row to volunteer for experiments just before their execution. Kevorkian saw individual consent as the wheel, axle, and grease for all decisions in these matters. He helped many people die, but it is unclear what moral principle guided his decisions to say yes and no to requests for help in dying. His spree in helping people die came to an end when he himself injected a man with a lethal substance. Because of his single-minded focus on the value of assisted suicide and experimentation before execution, he had little impact on the broader ethical analysis of assisted suicide and the rights of prisoners. He leaves little legacy in ethics for the analysis of assisted suicide or in vivo experimentation.
In “Concepts Are Not a Natural Kind” (2005), I argued that the notion of concept in psychology and in neuropsychology fails to pick out a natural kind. Piccinini and Scott (2006, in this issue) have criticized the argument I used to support this conclusion. They also proposed two alternative arguments for a similar conclusion. In this reply, I rebut Piccinini and Scott’s main objection against the argument proposed in “Concepts Are Not a Natural Kind.” Moreover, I show that the two alternative arguments developed by Piccinini and Scott are not promising for supporting the conclusion that concepts are not a natural kind.
In several works, Frege argues that content is objective (i.e., the thoughts we entertain and communicate, and the senses of which they are composed, are public, not private, property). There are, however, some remarks in the Fregean corpus that are in tension with this view. This paper is centered on an investigation of the most notorious and extreme such passage: the ‘Dr. Lauben’ example, from Frege (1918). A principal aim is to attain more clarity on the evident tension within Frege’s views on content, between this dominant objectivism and some elements that seem to run counter to it, via developing an understanding of the ‘Dr. Lauben’ example. Then I will argue that this interpretation goes some way toward undermining some prevalent contemporary views about language. Based on the advice of Dr. Lauben, I will argue against a certain understanding of the causal-historical theory of reference – more specifically, of the phenomenon of deferential uses of linguistic expressions – upon which these views are premised, and I will draw out some morals that pertain to individualism and competence.
This essay is a discussion of the radio talk show host Dr. Laura Schlessinger. It is an assessment of the moral advice that she dispenses on her radio show, and of the kinds of criticisms to which she has been subjected.
In December 1980 an elementary school teacher in Minnesota obtained a Restraining Order to ensure that a severely brain damaged friend would receive emergency medical care in her nursing home if she needed it. This situation focussed attention on the need for better understanding, among medical professionals and consumers alike, of the significance of a No Dr. Blue/Do Not Resuscitate order.
The Strange Case of Dr. B and Mr. Hide: Ethical Sensitivity as a Means to Reflect Upon One’s Actions in Managing Conflict of Interest. Case study by Marie-Josée Potvin (Programmes de bioéthique, Department of Social and Preventive Medicine, Université de Montréal, C.P. 6128, succ. Centre-ville, Montréal, Québec, Canada H3C 3J7), Journal of Bioethical Inquiry, pp. 1–3, DOI 10.1007/s11673-012-9360-4.
The one great quality of the Socratic gift is that thinking as an activity continues, yet not repetitively: every time thinking takes place, it takes place anew. Thinking is the one activity that cannot be repeated like prayers and other pieties. All philosophical thinking is new thinking; it has to be new in order to be thinking. Philosophy had to become the handmaid of sociology and could not be allowed to remain surrogate sociology. When this happened, new concepts or new conceptualizations became the need of the hour: in the place of the age-old hierarchic social stratification, a novel concept of materialism had to be inducted – after all, matter is what matters. And in India a morally entangled sociology was holding down the rich human resources of the subcontinent, and a development-oriented ideology had to convert this moral society into a legal society: an unlegislated, unlegislatable society is condemned to be unstable and collapsible; in its place a stable, legislatable society had to be created. With this felt need, Dr. Ambedkar came into the Indian political arena and gave a modernist rethinking to the outmoded Indian social structure. His hallmark was: think to change.
The Reply to Dr. Rolf’s essay makes the following main points: (1) The logic of inexactness has the same syntax as Kleene's three-valued logic. Its semantics is different in that the third truth-value can by choice be correctly turned into either truth or falsehood. (2) The definition of resemblance classes includes, but is not exhausted by, ostensive rules. (3) The application of classical mathematics to sense-experience consists in the limited identification of non-isomorphic structures. (4) There are exact perceptual and vague mathematical concepts. (5) The distinction between my categorial framework, a categorial framework, and the true categorial framework, if any, is neither relativistic nor absolutistic.
Television series are a fixture of contemporary culture, and for the most part they are characterized by a lack of any artistic or intellectual value. Among the few exceptions in this respect is the series Dr House. The central issue in the series is the moral stance of its main character, in which critics have noted many analogies to the morality of the Nietzschean Übermensch. This article attempts to show that Dr House's morality reflects the Nietzschean model of the aestheticization of morality, according to which the criterion of the ethical rightness of actions is the freedom of the individual and his autonomy in shaping his own life as a work of art.
After identifying points of agreement between Karl Rahner and Hans Urs von Balthasar on topics raised by Dr. Sain’s essay, this response raises questions about the deeper foundations of the substantial differences between them. It suggests that the appeal to the contrast in their starting-points (Goethe versus Kant) is not an adequate explanation, and it proposes lines of inquiry that might be pursued further.
Obscurity is not the worst failing, and it is philistinism to pretend that it is. In a series of brilliant essays written over the last fifteen years Stanley Cavell has consistently argued that more important than the question whether obscurity could have been avoided is whether it affects our confidence in the author. Confidence raises the issue of intention, and I would have thought that the primary commitment of a psychoanalytic writer was to pass on, and (if he can) to refine while passing on, a particular way of exploring the mind. Indeed this is how Lacan himself proposes that his work should be judged. “The aim of my teaching,” he writes, “has been and still is the training of analysts.” For decades now Lacan has been insisting that the nature of this commitment has been systematically obscured, particularly in North America. Training has become “routinized”, and analysis itself has become distorted into a process of crude social adaptation. There is much here to agree with. Yet two questions must be raised. Has Lacan devised a more effective method of training analysts? And would one expect this from his writings? Neither question gets a favourable answer. All reports of his training methods, over which he has now brought about three distinct secessions within the French psychoanalytic movement, are horrifying. It is now, I am told, possible to become a Lacanian analyst after a very few months of Lacanian analysis. And what pedagogic contribution could we expect from a form of prose that has two salient characteristics: it exhibits the application of theory to particular cases as quite arbitrary, and it forces the adherents it gains into pastiche. Lacan's ideas and Lacan's style, yoked in an indissoluble union, represent an invasive tyranny.
And it is by a hideous irony that this tyranny should find its recruits among groups that have nothing in common except the sense that they lack a theory worthy of their cause or calling: feminists, cinéastes, professors of literature. Lacan himself offers several justifications for his obscurity, about which he has no false modesty. At times he says that he is the voice, the messenger, the porte-parole, of the unconscious itself. Lacan's claim stirs in my mind the retort Freud made to a similar assault upon his credulity, and by someone who had learned from Lacan. “It is not the unconscious mind I look out for in your paintings,” Freud said to Salvador Dali, “it is the conscious.”
A theory of punishment should tell us not only when punishment is permissible but also when it is a duty. It is not clear whether McCloskey's retributivism is supposed to do this. His arguments against utilitarianism consist largely in examples of punishments unacceptable to the common moral consciousness but supposedly approved of by the consistent utilitarian. We remain unpersuaded to abandon our utilitarianism. The examples are often fanciful in character, a point which (pace McCloskey) does rob them of much of their force. If there was no tension between utilitarian precepts and those which come naturally to plain men, utilitarianism could have no claim to provide a critique of moralities. The utilitarian's attitude to such tensions is somewhat complicated, but what is certain is that there is more room in his system for the sentiments to which McCloskey appeals against him than McCloskey realizes. We agree with McCloskey, however, on the absurdity of substituting rule-utilitarianism for act-utilitarianism as an answer to his attacks. The distinction itself may represent a conceptual confusion. In our view, indeed, unmodified act-utilitarianism provides the best moral basis for thought about punishment.
In this essay, I use a thought experiment to illustrate the human predicament if determinism is true, then draw the implications of this result for human rationality. This paper was read at the Eastern Division of the Society for Christian Philosophers at Assumption College in Worcester, Massachusetts in 2009.
Twenty years have passed since Gould and Lewontin published their critique of ‘the adaptationist program’ – the tendency of some evolutionary biologists to assume, rather than demonstrate, the operation of natural selection. After the ‘Spandrels paper’, evolutionists were more careful about producing just-so stories based on selection, and paid more attention to a panoply of other processes. Then came reactions against the excesses of the anti-adaptationist movement, which ranged from a complete dismissal of Gould and Lewontin’s contribution to a positive call to overcome the problems. We now have an excellent opportunity for finally affirming a more balanced and pluralistic approach to the study of evolutionary biology.