This paper revisits the often-debated question "Can machines think?" It is argued that the usual identification of machines with the notion of algorithm has been both counter-intuitive and counter-productive. This is because the notion of algorithm only requires an algorithm to contain a finite but arbitrary number of rules, whereas intuitively people tend to think of an algorithm as having a rather limited number of rules. The paper further proposes a modification of the above-mentioned explication of the notion of machines by quantifying the length of an algorithm. On that basis it appears possible to reconcile the opposing views on the topic, which people have been arguing about for more than half a century.
Machine generated contents note: -- Preface -- Acknowledgments -- Introduction, by Michael Weisberg and Jeffrey Kovac -- 1. Trying to Understand, Making Bonds, by Roald Hoffmann -- Part 1: Chemical Reasoning and Explanation -- 2. Why Buy That Theory?, by Roald Hoffmann -- 3. What Might Philosophy of Science Look Like If Chemists Built It?, by Roald Hoffmann -- 4. Unstable, by Roald Hoffmann -- 5. Nearly Circular Reasoning, by Roald Hoffmann -- 6. Ockham's Razor and Chemistry, by Roald Hoffmann, Vladimir I. Minkin, and Barry K. Carpenter -- 7. Qualitative Thinking in the Age of Modern Computational Chemistry, or What Lionel Salem Knows, by Roald Hoffmann -- 8. Narrative, by Roald Hoffmann -- 9. Learning from Molecules in Distress, by Roald Hoffmann and Henning Hopf -- 10. Why Think Up New Molecules?, by Roald Hoffmann -- 11. Protean, by Roald Hoffmann and Pierre Laszlo -- 12. How Should Chemists Think?, by Roald Hoffmann -- Part 2: Writing and Communicating in Chemistry -- 13. Under the Surface of the Chemical Article, by Roald Hoffmann -- 14. Representation in Chemistry, by Roald Hoffmann and Pierre Laszlo -- 15. The Say of Things, by Roald Hoffmann and Pierre Laszlo -- 16. How Symbolic and Iconic Languages Bridge the Two Worlds of the Chemist: A Case Study from Contemporary Bioorganic Chemistry, by Emily R. Grosholz and Roald Hoffmann -- 17. How Nice to Be an Outsider, by Roald Hoffmann -- 18. The Metaphor, Unchained, by Roald Hoffmann -- Part 3: Art and Science -- 19. Art in Science?, by Roald Hoffmann -- 20. Science and Crafts, by Roald Hoffmann -- 21. Molecular Beauty, by Roald Hoffmann -- Part 4: Chemical Education -- 22. Teach to Search, by Roald Hoffmann -- 23. Some Heretical Thoughts on What Our Students Are Telling Us, by Roald Hoffmann and Brian P. Coppola -- 24. Very Specific Teaching Strategies, and Why They Work, by Roald Hoffmann and Saundra Y. McGuire -- Part 5: Ethics in Science -- 25. Mind the Shade, by Roald Hoffmann -- 26. Science and Ethics: A Marriage of Necessity and Choice for this Millennium, by Roald Hoffmann -- 27. Honesty to the Singular Object, by Roald Hoffmann -- 28. The Material and Spiritual Rationales Are Inseparable, by Roald Hoffmann -- Index.
Discussions concerning belief revision, theory development, and "creativity" in philosophy and AI reveal a growing interest in Peirce's concept of abduction. Peirce introduced abduction in an attempt to provide theoretical dignity and clarification to the difficult problem of knowledge generation. He wrote that "An Abduction is Originary in respect to being the only kind of argument which starts a new idea" (Peirce, CP 2.26). These discussions, however, led to considerable debates about the precise way in which Peirce's abduction can be used to explain knowledge generation (cf. Magnani, 1999; Hoffmann, 1999). The crucial question is that of understanding how we can get the new elements capable of enlarging our theories. Under these circumstances, it might be helpful to step out of the entanglement and reconsider the basis of the problem that originally triggered Peirce's interest in abduction. This will lead us to another Peircean concept, that of "diagrammatic reasoning," which I discuss here in the context of his "pragmatism." In this way, I hope to reach a better understanding of the contribution of "abduction" to the knowledge generation process.
The contributions to this volume originate from the workshop "Hauptsachen und Nebendinge—Pure Science and its Impurities," organized by Christoph Hoffmann, which took place at the Max-Planck-Institute for the History of Science (Berlin) in July 2000. We wish to thank all participants for rich and stimulating talks and discussions.
This experiment investigated the use of positive and negative hypothesis and target tests by groups in an adaptation of the 2-4-6 Wason task. The experimental variables were range of rule (small vs large), amount of evidence (low vs high), and trial block (1 vs 2). The results were in accordance with Klayman and Ha's (1987) analysis of base rate probabilities of falsification and with additional theoretical considerations. Base rate probabilities were more descriptive of participants' behaviour in target than in hypothesis tests, under low than under high amount of evidence, and at the beginning of the process than at its end. The percentage of positive tests was higher under small than large range of rule. More falsifications than verifications resulted from hypothesis tests than would be expected by a random process. When evidence is richly available, the relative importance of falsification seems to decrease. An analysis of the group compositions before and after group discussion by the PCD model (Crott, Werner, & Hoffmann, 1996) revealed that the normative weight was approximately twice as large as the informational. Groups produced fewer false answers than their members individually.
The author presents Gernot Böhme’s median mode of being theory, which attempts to find an anthropological middle ground between the rational and the irrational, the spiritual and the corporeal, and the active and passive in human experience. Böhme’s reflections on the median mode of being are normative in character and linked to the concept of “sovereign man,” which he strongly defends and whose main characteristics Hoffmann outlines in the first part of the essay. Among others, Hoffmann argues against Böhme’s excessive emphasis on the controlling/restrictive functions of awareness at the cost of those functions which serve to protect and stimulate life, his non-distinction between the distance to a cognized object and its intellectual instrumentalisation, and his rather one-sided tendency to seek the sources of European rationalism in the Socratean tradition.
Turning to a brief consideration of United States foreign policy, Hoffmann points to particular moral difficulties in U.S. stances and urges the development of superpower rules that are effective and ethical.
An action-oriented perspective changes the role of an individual from a passive observer to an actively engaged agent interacting in a closed loop with the world as well as with others. Cognition exists to serve action within a landscape that contains both. This chapter surveys this landscape and addresses the status of the pragmatic turn. Its potential influence on science and the study of cognition is considered (including perception, social cognition, social interaction, sensorimotor entrainment, and language acquisition), and its impact on how neuroscience is studied is also investigated (with the notion that brains do not passively build models, but instead support the guidance of action). A review of its implications in robotics and engineering includes a discussion of the application of enactive control principles to couple action and perception in robotics as well as the conceptualization of system design in a more holistic, less modular manner. Practical applications that can impact the human condition are reviewed (e.g., educational applications, treatment possibilities for developmental and psychopathological disorders, the development of neural prostheses). All of this foreshadows the potential societal implications of the pragmatic turn. The chapter concludes that an action-oriented approach emphasizes a continuum of interaction between technical aspects of cognitive systems and robotics, biology, psychology, the social sciences, and the humanities, where the individual is part of a grounded cultural system.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing,’ where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: The question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
On rationalist infallibilism, a wide range of both (i) analytic and (ii) synthetic a priori propositions can be infallibly justified (or absolutely warranted), i.e., justified to a degree that entails their truth and precludes their falsity. Though rationalist infallibilism is indisputably running its course, adherence to at least one of the two species of infallible a priori justification refuses to disappear from mainstream epistemology. Among others, Putnam (1978) still professes the a priori infallibility of some category (i) propositions, while Burge (1986, 1988, 1996) and Lewis (1996) have recently affirmed the a priori infallibility of some category (ii) propositions. In this paper, I take aim at rationalist infallibilism by calling into question the a priori infallibility of both analytic and synthetic propositions. The upshot will be twofold: first, rationalist infallibilism unsurprisingly emerges as a defective epistemological doctrine, and second, more importantly, the case for the a priori infallibility of one or both categories of propositions turns out to lack cogency.
Previous research has shown that subliminally presented stimuli accelerate or delay responses afforded by supraliminally presented stimuli. Our experiments extend these findings by showing that unconscious stimuli even affect free choices between responses. Thus, actions that are phenomenally experienced as freely chosen are influenced without the actor becoming aware of the manipulation. However, the unconscious influence is limited to a response bias, as participants chose the primed response only in up to 60% of the trials. LRP data in free choice trials indicate that the prime was not ineffective in trials in which participants chose the non-primed response, as it then delayed performance of the incongruently primed response.
Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or "soft" bodies. These carry substantial potential regarding their exploitation for control—sometimes referred to as "morphological computation". In this article, we briefly review different notions of computation by physical systems and propose the dynamical systems framework as the most useful in the context of describing and eventually designing the interactions of controllers and bodies. Then, we look at the pros and cons of simple vs. complex bodies, critically reviewing the attractive notion of "soft" bodies automatically taking over control tasks. We address another key dimension of the design space—whether model-based control should be used and to what extent it is feasible to develop faithful models for different morphologies.
Among the many problems posed by Peirce's concept of abduction is how to determine the scope of this form of inference, and how to distinguish different types of abduction. This problem can be illustrated by taking a look at one of his best known definitions of the term: "Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea; for induction does nothing but determine a value, and deduction merely evolves the necessary consequences of a pure hypothesis." The second half of this quote is not part of the definition, but an explanation of it. However, it adds something to this definition because it says implicitly that there are only three logical operations.
Tailoring the design of robot bodies for control purposes is implicitly performed by engineers; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or "soft" bodies. These carry substantial potential regarding their exploitation for control – sometimes referred to as "morphological computation" in the sense of offloading computation needed for control to the body. Here, we will argue in favor of a dynamical systems rather than computational perspective on the problem. Then, we will look at the pros and cons of simple vs. complex bodies, critically reviewing the attractive notion of "soft" bodies automatically taking over control tasks. We will address another key dimension of the design space – whether model-based control should be used and to what extent it is feasible to develop faithful models for different morphologies. ----- This paper was also published in the 2014 AISB proceedings http://aisb50.org/representation-of-reality-humans-animals-and-machines/ - http://doc.gold.ac.uk/aisb50/.
As a committee of the National Academy of Engineering recognized, ethics education should foster the ability of students to analyze complex decision situations and ill-structured problems. Building on the NAE's insights, we report on an innovative teaching approach that has two main features: first, it places the emphasis on deliberation and on self-directed, problem-based learning in small groups of students; and second, it focuses on understanding ill-structured problems. The first innovation is motivated by an abundance of scholarly research that supports the value of deliberative learning practices. The second results from a critique of the traditional case-study approach in engineering ethics. A key problem with standard cases is that they are usually described in such a fashion that the ethical problem appears too obvious and simplistic. The practitioner, by contrast, may face problems that are ill-structured. In the collaborative learning environment described here, groups of students use interactive and web-based argument visualization software called "AGORA-net: Participate – Deliberate!". The function of the software is to structure communication and problem solving in small groups. Students are confronted with the task of identifying possible stakeholder positions and reconstructing their legitimacy by constructing justifications for these positions in the form of graphically represented argument maps. The argument maps are then presented in class so that these stakeholder positions and their respective justifications become visible and can be brought into a reasoned dialogue. Argument mapping provides an opportunity for students to collaborate in teams and to develop critical thinking and argumentation skills.
Metaphysical nihilism is the thesis that there could have been no concrete objects. Thomas Baldwin (1996) offers an argument for metaphysical nihilism. The premisses of the argument purport to provide a procedure of subtraction that can be iterated until we reach a world where no concrete objects exist. Gonzalo Rodriguez-Pereyra (1997) finds fault with Baldwin’s argument, modifies it, and claims to have proved metaphysical nihilism. My primary aim is to show that Rodriguez-Pereyra’s alleged proof rests on a false assumption. The assumption is that, necessarily, if there are no concrete* objects (in the sense Rodriguez-Pereyra defines), then there are no concrete objects. My secondary aim, with which I begin, is to formulate and then strengthen a succinct version of the subtraction argument.
Abductive reasoning takes place in forming "hypotheses" in order to explain "facts." Thus, the concept of abduction promises an understanding of creativity in science and learning. However, it also raises a number of problems, some of which will be discussed in this paper. After analyzing the difference between induction and abduction (1), I shall discuss Peirce's claim that there is a "logic" of abduction (2). The thesis is that this claim can be understood if we make a clear distinction between inferential elements and perceptive elements of abductive reasoning. For Peirce, the creative act of forming explanatory hypotheses and the emergence of "new ideas" belongs exclusively to the perceptive side of abduction. Thus, it is necessary to study the role of perception in abductive reasoning (3). A further problem is the question whether there is a relationship between abduction and Peirce's concept of "theorematic reasoning" in mathematics (4). Both forms of reasoning could be connected, because both are based on perception. The last problem concerns the role of instincts in explaining the success of abductive reasoning in science, and the question whether the concept of instinct might be replaced by methods of inquiry (5).
Signs do not only “represent” something for somebody, as Peirce’s definition goes, but also “mediate” relations between us and our world, including ourselves, as has been elaborated by Vygotsky. We call the first the representational function of a sign and the second the epistemological function since in using signs we make distinctions, specify objects and relations, structure our observations, and organize societal and cognitive activity. The goal of this paper is, on the one hand, to develop a model in which both these functions appear as complementary and, on the other, to show that this complementarity is essential for the dynamics of scientific activity, causing a dialectical process of generating new epistemological and representational means. This will be demonstrated with an example of how two scientists with different background knowledge analyze educational data collaboratively.
On rationalist infallibilism, a wide range of both (i) analytic and (ii) synthetic a priori propositions can be infallibly justified, i.e., justified in a way that is truth-entailing. In this paper, I examine the second thesis of rationalist infallibilism, what might be called ‘synthetic a priori infallibilism’. Exploring the seemingly only potentially plausible species of synthetic a priori infallibility, I reject the infallible justification of so-called self-justifying propositions.
Minimalism is currently the received deflationary theory of truth. On minimalism, truth is a transparent concept and a deflated property of truth bearers. In this paper, I situate minimalism within current deflationary debate about truth by contrasting it with its main alternative―the redundancy theory of truth. I also outline three of the primary challenges facing minimalism (its formulation, explanatory adequacy, and stability) and draw some lessons for the soundness of its conception of truth.
Why do we formulate arguments? Usually, things such as persuading opponents, finding consensus, and justifying knowledge are listed as functions of arguments. But arguments can also be used to stimulate reflection on one’s own reasoning. Since this cognitive function of arguments should be important to improve the quality of people’s arguments and reasoning, for learning processes, for coping with “wicked problems,” and for the resolution of conflicts, it deserves to be studied in its own right. This contribution develops first steps towards a theory of reflective argumentation. It provides a definition of reflective argumentation, justifies its importance, delineates it from other cognitive functions of argumentation in a new classification of argument functions, and it discusses how reflection on one’s own reasoning can be stimulated by arguments.
This article examines differences in the research approaches of farmers and scientists and analyzes how these differences are related to the conditions under which both groups engage in experimental work. Theoretical considerations as well as practical experiences are presented to emphasize the great potential of farmer–researcher collaboration for rural innovation. In the first part of the article, the innovative power of farmer research and experimentation is acknowledged by presenting examples such as crop and animal breeding, development of new production systems, farm equipment, and social innovations. Considering the respective comparative advantages of farmers and scientists, and inspired by theoretical concepts in the fields of knowledge management and innovation processes, we discuss five topics for optimizing the collaboration between farmers and scientists in the field of technological innovation: user orientation, decentralization, informal modes of experimentation, externalization of tacit knowledge, and economic considerations. A better understanding of such issues could help researchers to define their own role in the research process, acknowledge the strengths and weaknesses of their own and farmers’ research approaches, overcome communication gaps, and find creative solutions for problems that typically occur in the process of participatory technology development.
The possible-worlds analysis of propositions identifies a proposition with the set of possible worlds where it is true. This analysis has the hitherto unnoticed consequence that a proposition depends for its existence on the existence of every proposition that entails it. This peculiar consequence places the possible-worlds analysis in conflict with the conjunction of two compelling theses. One thesis is that a phrase of the form ‘the proposition that S’ is a rigid designator. The other thesis is that a proposition which is directly about an object – a singular proposition – depends for its existence on the existence of the object. I defend these theses and conclude that the cost of the possible-worlds analysis is prohibitively high.
This volume represents an important contribution to Peirce’s work in mathematics and formal logic. An internationally recognized group of scholars explores and extends understandings of Peirce’s most advanced work. The stimulating depth and originality of Peirce’s thought and the continuing relevance of his ideas are brought out by this major book.
Had more philosophers of science come from chemistry, their thinking would have been different. I begin by looking at a typical chemical paper, in which making something is the leitmotif, and conjecture/refutation is pretty much irrelevant. What in fact might have been, might be, different? The realism of chemists is reinforced by their remarkable ability to transform matter; they buy into reductionism where it serves them, but make no real use of it. Incommensurability is taken without a blink, and actually serves. The preeminence of synthesis in chemistry could have led philosophers of science to take more seriously questions of aesthetics within science, and to find a place in aesthetics for utility. The necessary motion twixt macroscopic and microscopic views of matter in modern chemistry leads to the coexistence of symbolic and iconic representations. And in another way to the deliberate, creative violation of categories.
The aim of this paper is to define a notion of supervenience which can adequately describe the systematic dependence of extrinsic as well as of intrinsic higher-level properties on base-level features. We argue that none of the standard notions of supervenience—the concepts of weak, strong and global supervenience—fulfil this function. The concept of regional supervenience, which is purported to improve on the standard conceptions, turns out to be problematic as well. As a new approach, we develop the notion of property-dependent supervenience. This notion is founded on a criterion of relevance adapting the supervenience base to the considered higher-level properties in a specific way, such that only features which are relevant to the instantiation of the higher-level properties under consideration are taken into account.
This essay is the first attempt to compare Reinhart Koselleck's Historik with Hannah Arendt's political anthropology and her critique of the modern concept of history. Koselleck is well-known for his work on conceptual history as well as for his theory of historical time. It is my contention that these different projects are bound together by Koselleck's Historik, that is, his theory of possible histories. This can be shown through an examination of his writings from Critique and Crisis to his final essays on historical anthropology, most of which have not yet been translated into English. Conversely, Arendt's political theory has in recent years been the subject of numerous interpretations that do not take into account her views about history. By comparing the anthropological categories found in Koselleck's Historik with Arendt's political anthropology, I identify similar intellectual lineages in them as well as shared political sentiments, in particular the anti-totalitarian impulse of the postwar era. More importantly, Koselleck's theory of the preconditions of possible histories and Arendt's theory of the preconditions of the political, I argue, transcend these lineages and sentiments by providing essential categories for the analysis of historical experience.
The primary goal of this chapter is to present a new method, called Logical Argument Mapping (LAM), for the analysis of framing processes as they occur in any communication, but especially in conflicts. I start with a distinction between boundary setting, meaning construction, and sensemaking as three forms or aspects of framing, and argue that crucial for the resolution of frame-based controversies is our ability to deal with those “webs” of mutually supporting beliefs that determine sensemaking processes. Since any analysis of framing in conflicts and communication is itself influenced by sensemaking—there is no “frame-neutrality”—the main problem for an analyst is to cope with his or her own cognitive limitations. LAM offers a solution to this problem. The method will be exemplified with an analysis of two conflicting interpretations of how the international community should deal with Hamas after its election victory in 2006.
The superassertability theory of truth, inspired by Crispin Wright (1992, 2003), holds that a statement is true if and only if it is superassertable in the following sense: it possesses warrant that cannot be defeated by any improvement of our information. While initially promising, the superassertability theory of truth is vulnerable to a persistent difficulty highlighted by James Van Cleve (1996) and Terrence Horgan (1995) but not properly fleshed out: it is formally illegitimate in a similar sense that unsophisticated epistemic theories of truth are widely acknowledged to be. Sustained analysis reveals that the unrestricted formal legitimacy argument is firmly grounded in first person conceivability evidence.
Linguistic competence, in general terms, involves the ability to learn, understand, and speak a language. The nativist view in the philosophy of linguistics holds that the principal foundation of linguistic competence is an innate faculty of linguistic cognition. In this paper, close scrutiny is given to nativism's fundamental commitments in the area of metaphysics. In the course of this exploration it is argued that any minimally defensible variety of nativism is, for better or worse, married to two theses: linguistic competence is grounded in a faculty of linguistic cognition that is (i) embodied and (ii) whose operating rules are represented in the brains of human language users.
In the 1890s Ludwig Mach employed photography for visualizing streamlines in the emerging field of aerodynamic research. Étienne-Jules Marey developed a similar approach at the turn of the century. The two projects can be related to a number of current discussions on the history of scientific photography. The case of Ludwig Mach demonstrates how the collection of numerical data became both the subject and the challenge of a line of research intimately linked to the capacities of photography. At the end of the nineteenth century, the particular potential of scientific photography is very often defined by comparison with the limited power of the human eye. In contrast, the example of streamline photography underlines that the requirements of the research context are critical for successfully employing photography. Marey’s studies point to a tension between his characterization of chronophotography as a method for analyzing the elementary units of processes in nature on the one hand and the necessary summation of single moments in time in his recordings of streamlines on the other. What Marey usually qualified as a cumbersome confusion was here the prerequisite of observation. The ‘philosophy in machines’ ultimately limited the success of streamline photography; it aided in debates about qualitative matters, but could hardly provide what most interested scientists and engineers: reliable numbers.
It seems that every singular proposition implies that the object it is singular with respect to exists. It also seems that some propositions are true with respect to possible worlds in which they do not exist. The puzzle is that it can be argued that there is contradiction between these two principles. In this paper, I explain the puzzle and consider some of the ways one might attempt to resolve it. The puzzle is important because it has implications concerning the way we think about the relationship between a proposition and the claim that the proposition is true.
This book studies medieval theories of angelology insofar as they made groundbreaking contributions to medieval philosophy. The discussion of angels, made famous by the humanist caricature of ‘how many angels can dance on the head of a pin’, was nevertheless a crucial one in medieval philosophical debates. All scholastic masters pronounced themselves on angelology, if only in their Sentence commentaries. The questions concerning angelic cognition, speech, free decision, movement, etc. were springboards for profound philosophical discussions that have to do with anthropology and metaphysics no less than with angelology. Angels qua separate substances were of central importance in medieval metaphysics (with questions on universal hylomorphism, the esse-essentia composition of creatures, and those regarding individuation of material and immaterial substances). The doctrine of angels has not been the subject of much study in the history of medieval thought, and the volume fills an important gap in the literature. The chapters offer a well-rounded, if not encyclopedic, discussion in the chronological or doctrinal sense. They cover the history of debate from Augustine and Pseudo-Dionysius until the later middle ages, but instead of an author-by-author approach, focus rather on seminal ideas with demonstrable relevance to “secular” and modern philosophical concerns.