A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different from humans. In this paper we approach this problem in the following way: we take a number of well-known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
Charles Peirce's diagrammatic logic — the Existential Graphs — is presented as a tool for illuminating how we know necessity, in answer to Benacerraf's famous challenge that most ‘semantics for mathematics’ do not ‘fit an acceptable epistemology’. It is suggested that necessary reasoning is in essence a recognition that a certain structure has the particular structure that it has. This means that, contra Hume and his contemporary heirs, necessity is observable. One just needs to pay attention, not merely to individual things but to how those things are related in larger structures, certain aspects of which relations force certain other aspects to be a certain way.
This article is concerned with developing a philosophical approach to a number of significant changes to academic publishing, and specifically the global journal knowledge system wrought by a range of new digital technologies that herald the third age of the journal as an electronic, interactive and mixed-media form of scientific communication. The paper emerges from an Editors' Collective, a small New Zealand-based organisation comprised of editors and reviewers of academic journals mostly in the fields of education and philosophy. The paper is the result of a collective writing process.
In many diagrams one seems to perceive necessity – one sees not only that something is so, but that it must be so. That conflicts with a certain empiricism largely taken for granted in contemporary philosophy, which holds that perception is not capable of such feats. The reason for this belief is often thought well-summarized in Hume's maxim: ‘there are no necessary connections between distinct existences’. It is also thought that even if there were such necessities, perception is too passive or localized a faculty to register them. We defend the perception of necessity against such Humeanism, drawing on examples from mathematics.
Robert Brandom’s expressivism argues that not all semantic content may be made fully explicit. This view connects in interesting ways with recent movements in philosophy of mathematics and logic (e.g. Brown, Shin, Giaquinto) to take diagrams seriously: as more than a mere “heuristic aid” to proof, but as either proofs themselves or irreducible components of proofs. However, what exactly is a diagram in logic? Does this constitute a semiotic natural kind? The paper will argue that such a natural kind does exist in Charles Peirce’s conception of iconic signs, but that, fully understood, logical diagrams involve a structured array of normative reasoning practices, not merely a “picture on a page”.
Much mainstream analytic epistemology is built around a sceptical treatment of modality which descends from Hume. The roots of this scepticism are argued to lie in Hume’s (nominalist) theory of perception, which is excavated, studied and compared with the very different (realist) theory of perception developed by Peirce. It is argued that Peirce’s theory not only enables a considerably more nuanced and effective epistemology, it also (unlike Hume’s theory) does justice to what happens when we appreciate a proof in mathematics.
This entry explores Charles Peirce's account of truth in terms of the end or ‘limit’ of inquiry. This account is distinct from – and arguably more objectivist than – views of truth found in other pragmatists such as James and Rorty. The roots of the account in mathematical concepts are explored, and it is defended from objections that it is (i) incoherent, (ii) in its faith in convergence, too realist and (iii) in its ‘internal realism’, not realist enough.
Neopragmatism is currently a burgeoning area of philosophical research, and Huw Price is positioned as a key heir of its originary figure Richard Rorty. In the late 1960s and early ‘70s, Rorty famously burst onto an Anglo-American philosophical scene largely dominated by still-positivistic analytic philosophy and initiated a great revival for pragmatism. This intervention provoked a significant counter-reaction. Rorty’s ideas were widely viewed as blithely disregarding important issues such as whether reality exists, and if so what is its nature, and erasing the possibility of principled distinctions between ground-breaking scientific inquiry, great works of Western Philosophy, and any other “kind of writing”...
Charles Peirce famously divided all signs into icons, indices and symbols. The past few decades have seen mainstream analytic philosophy broaden its traditional focus on symbols to recognise the so-called essential indexical. Can the moral now be extended to icons? Is there an “essential icon”? And if so, what exactly would be essential about it? It is argued that there is, and that it consists in logical form. Danielle Macbeth’s radical new “expressivist” interpretation of Frege’s logic and Charles Peirce’s existential graphs are mobilized in support of this claim.
This article explores how Robert Brandom's original "inferentialist" philosophical framework should be positioned with respect to the classical pragmatist tradition. It is argued that Charles Peirce's original attack on the use of "intuition" in nineteenth-century philosophy of mind is in fact a form of inferentialism, and thus an antecedent relatively unexplored by Brandom in his otherwise comprehensive and illuminating "tales of the mighty dead." However, whereas Brandom stops short at a merely "strong" inferentialism, which admits some non-inferential mental content, Peirce embraces a total, that is, "hyper-," inferentialism. Some consequences of this difference are explored, and Peirce's more thoroughgoing position is defended.
The somewhat old-fashioned concept of philosophical categories is revived and put to work in automated ontology building. We describe a project harvesting knowledge from Wikipedia’s category network in which the principled ontological structure of Cyc was leveraged to furnish an extra layer of accuracy-checking over and above more usual corrections which draw on automated measures of semantic relatedness.
We discuss the one–many problem as it appears in the Philebus and find that it is not restricted to the usually understood problem about the identity of universals across particulars that instantiate them (the Hylomorphic Dispersal Problem). In fact some of the most interesting aspects of the problem occur purely with respect to the relationship between Forms. We argue that contemporary metaphysicians may draw from the Philebus at least three different one–many relationships between universals themselves: instantiation, subkind and part, and thereby construct three new ‘problems of the one and the many’ (an Eidetic Dispersal Problem, a Genus–Species Problem, and an Eidetic Combination Problem), which are as problematic as the version generally discussed. We then argue that this taxonomy sheds new and interesting light on certain discussions of higher-order universals in recent Australian analytic philosophy.
Feeding difficulties in older patients who are suffering from dementia present problems with balancing conflicting ethical principles. They have been considered by several writers in recent years, and the views of nursing and care staff have been studied in different contexts. The present study used focus groups to explore the way in which nursing and care staff in a National Health Service trust deal with conflict between ethical principles in this area. Three focus groups were convened, one each from the staff of three wards caring for patients with dementia. Case histories were discussed and transcripts analysed. It emerged that staff were aware of making fine judgements of utility concerning the spectrum of feeding methods available. Informants gave some weight to the principle of autonomy, but sought to balance that against their commitment to care. In explaining their perspectives, informants gave more weight to personal attitudes and trust culture than to professional ethics.
Much discussion of meaning by philosophers over the last 300 years has been predicated on a Cartesian first-person authority (i.e. “infallibilism”) with respect to what one’s terms mean. However this has problems making sense of the way the meanings of scientific terms develop, an increase in scientific knowledge over and above scientists’ ability to quantify over new entities. Although a recent conspicuous embrace of rigid designation has broken up traditional meaning-infallibilism to some extent, this new dimension to the meaning of terms such as “water” is yet to receive a principled epistemological undergirding (beyond the deliverances of “intuition” with respect to certain somewhat unusual possible worlds). Charles Peirce’s distinctive, naturalistic philosophy of language is mined to provide a more thoroughly fallibilist, and thus more realist, approach to meaning, with the requisite epistemology. Both his pragmatism and his triadic account of representation, it is argued, produce an original approach to meaning, analysing it in processual rather than objectual terms, and opening a distinction between “meaning for us”, the meaning a term has at any given time for any given community, and “meaning simpliciter”, the way use of a given term develops over time (often due to a posteriori input from the world which is unable to be anticipated in advance). This account provocatively undermines a certain distinction between “semantics” and “ontology” which is often taken for granted in discussions of realism.
This book, officially a contribution to the subject area of Charles Peirce’s semiotics, deserves a wider readership, including philosophers. Its subject matter is what might be termed the great question of how signification is brought about (what Peirce called the ‘riddle of the Sphinx’, who in Emerson’s poem famously asked, ‘Who taught thee me to name?’), and also Peirce’s answer to the question (what Peirce himself called his ‘guess at the riddle’, and Freadman calls his ‘sign hypothesis’).
This paper offers an expressivist account of logical form, arguing that in order to fully understand it one must examine what valid arguments make us do (or: what Achilles does and the Tortoise doesn’t, in Carroll’s famed fable). It introduces Charles Peirce’s distinction between symbols, indices and icons as three different kinds of signification whereby the sign picks out its object by learned convention, by unmediated indication, and by resemblance respectively. It is then argued that logical form is represented by the third, iconic, kind of sign. It is noted that icons uniquely enjoy partial identity between sign and object, and argued that this holds the key to Carroll’s puzzle. Finally, from this examination of sign-types metaphysical morals are drawn: that the traditional foes metaphysical realism and conventionalism constitute a false dichotomy, and that reality contains intriguingly inference-binding structures.
Would it be fairer to call Peirce’s philosophy of language “extensionalist” or “intensionalist”? The extensionalisms of Carnap and Quine are examined, and Peirce’s view is found to be prima facie similar, except for his commitment to the importance of “hypostatic abstraction”. Rather than dismissing this form of abstraction (famously derided by Molière) as useless scholasticism, Peirce argues that it represents a crucial (though largely unnoticed) step in much working inference. This, it is argued, allows Peirce to transcend the extensionalist-intensionalist dichotomy itself, through his unique triadic analysis of reference and meaning, by transcending the distinction between (as Quine put it) “things” and “attributes”.
Fourteen philosophers share their experience teaching Peirce to undergraduates in a variety of settings and a variety of courses. The latter include introductory philosophy courses as well as upper-level courses in American philosophy, philosophy of religion, logic, philosophy of science, medieval philosophy, semiotics, metaphysics, etc., and even an upper-level course devoted entirely to Peirce. The project originates in a session devoted to teaching Peirce held at the 2007 annual meeting of the Society for the Advancement of American Philosophy. The session, organized by James Campbell and Richard Hart, was co-sponsored by the American Association of Philosophy Teachers.
This paper begins by outlining Hume's understanding of perception according to which ideas are copies of impressions, which are thought to constitute a foundational confrontation with reality. This understanding is contrasted with Peirce's theory of perception according to which percepts give rise to perceptual judgements, but perceptual judgements are not a copy but an index (or 'true symptom' - just as a weather-cock indicates the direction of the wind) of the percept. Percept and perceptual judgement are thereby able to mutually inform and correct one another in rich ways, as the perceiver develops mental habits of interpreting their surroundings.
Argument-forms exist which are valid over finite but not infinite domains. Although formal logicians understand this point, philosophers can be observed treating as valid certain arguments which are in fact invalid over infinite domains. In support of this claim I will first present an argument by Mark Johnston against the classical pragmatist theory of truth. Then, more ambitiously, I will suggest the fallacy lurks in certain arguments for physicalism taken for granted by many philosophers today.
Charles S. Peirce’s semiotics uniquely divides signs into: i) symbols, which pick out their objects by arbitrary convention or habit, ii) indices, which pick out their objects by unmediated ‘pointing’, and iii) icons, which pick out their objects by resembling them (as Peirce put it: an icon’s parts are related in the same way that the objects represented by those parts are themselves related). Thus representing structure is one of the icon’s greatest strengths. It is argued that the implications of scaffolding education iconically are profound: for providing learners with a navigable road-map of a subject matter, for enabling them to see further connections of their own in what is taught, and for supporting meaningful active learning. Potential objections that iconic teaching is excessively entertaining and overly susceptible to misleading rhetorical manipulation are addressed.
Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and using it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced.
The word “hacker” has an interesting double meaning: one vastly more widespread connotation of technological mischief, even criminality, and an original meaning amongst the tech savvy as a term of highest approbation. Both meanings, however, share the idea that hackers possess a superior ability to manipulate technology according to their will (and, as with God, this superior ability to exercise will is a source of both mystifying admiration and fear). This book mainly concerns itself with the former meaning. To Thomas this simultaneously mystified and vilified, elusive set of individuals exemplifies “the performance of technology” (xx), showing the way in which “the cultural, social and political history of the computer...is fraught with complexity and contradictions” (ix). In fact, he claims that hacking is more a cultural than technological phenomenon, citing Heidegger’s “[t]he essence of technology is not anything technological” (56).
Peirce wrote that Hume’s argument against miracles (which is generally liked by twentieth-century philosophers for its antireligious conclusion) "completely misunderstood the true nature of" ’abduction’. This paper argues that if Hume’s argumentative strategy were seriously used in all situations (not just those in which we seek to "banish superstition"), it would deliver a choking epistemological conservatism. It suggests that some morals for contemporary naturalistic philosophy may be drawn from Peirce’s argument against Hume.
Wittgenstein's discussion of rule-following is widely regarded as having identified what Kripke called "the most radical and original sceptical problem that philosophy has seen to date". But does it? This paper examines the problem in the light of Charles Peirce's distinctive "scientific hierarchy". Peirce identifies a phenomenological inquiry which is prior to both logic and metaphysics, whose role is to identify the most fundamental philosophical categories. His third category, particularly salient in this context, pertains to general predication. Rule-following scepticism, the paper suggests, results from running together two questions: "How is it that I can project rules?", and, "What is it for a given usage of a rule to be right?". In Peircean terms the former question, concerning the irreducibility of general predication (to singular reference), must be answered in phenomenology, while the latter, concerning the difference between true and false predication, is answered in logic. A failure to appreciate this distinction, it is argued, has led philosophers to focus exclusively on Wittgenstein's famous public account of rule-following rightness, thus overlooking a private, phenomenological dimension to Wittgenstein's remarks on following a rule which gives the lie to Kripke's reading of him as a sceptic.
Through staged photographs in which she herself is often the lead actor or through appropriation of historical photographs, contemporary African American artist Carrie Mae Weems deconstructs the shaming of the black female body in American visual culture and offers counter-hegemonic images of black female beauty. The mirror has been foundational in Western theories of subjectivity and discussions of beauty. In the artworks I analyze in this article, Weems tactically employs the mirror to engage the topos of shame in order to reject it as a way of seeing the self and to offer a new way of lovingly seeing the self. I use the work of Kelly Oliver, Helen Block Lewis, and bell hooks to articulate the relationships among the mirror, shame, and black female subjectivity in Weems's work. Weems's subjects often reckon with what Oliver calls “social melancholy” as they experience shame while standing before the mirror. However, Weems also shows that by looking again—a critical strategy I explain using Oliver's model of “the loving eye”—her subjects can use the mirror as a corrective to the social shaming gaze and make it a stage for establishing black female subjectivity, a gaze of self-love, and beauty.
The shape of Christian Education in the United States has shifted as new communication media have come to the fore, interacting with the overarching purposes and content of Christian Education. As we begin to ask how computer technologies and the Internet may affect Christian Education, it is helpful to look back at the ways communication media have affected Christian Education over the past 200 years.
This paper takes indexicality as a case-study for critical examination of the distinction between semantics and pragmatics as currently conceived in mainstream philosophy of language. Both a ‘pre-indexical’ and ‘post-indexical’ analytic formal semantics are examined and found wanting, and instead an argument is mounted for a ‘properly pragmatist pragmatics’, according to which we do not work out what signs mean in some abstract overall sense and then work out to what use they are being put; rather, we must understand to what use signs are being put in order to work out what they mean.
A commentary on a current paper by Aaron Sloman. Sloman argues that in order to make progress in AI, consciousness "should be replaced by more precise and varied architecture-based concepts better suited to specify what needs to be explained by scientific theories". This original vision of philosophical inquiry as mapping out 'design-spaces' for a contested concept seeks to achieve a holistic, synthetic understanding of what possibilities such spaces embody. It therefore does not reduce to either "relations of ideas" or "matters of fact" in Hume's famous dichotomy. It is also interestingly opposed to a current vogue for 'experimental philosophy'.
As a result of recent legislative developments and greater ease of accessibility, the Human Resources Manager (HRM) faces the challenge of not only maintaining records but also that of protecting employees from misuse of personal information contained in their individual personnel files. The widespread use of computers for maintaining employee records has resulted in new ethical dimensions and/or challenges for the HRM. Serious questions regarding accessibility to and dissemination of such personal information now confront the HRM. Unless policies are developed by organizations for dealing with such questions, eventually government will mandate such policies in order to protect employee rights.
Considering that negative intergroup emotions can hinder conflict resolution, we proposed integrative emotion regulation (IER) as a possible predictor of conciliatory policies towards outgroups in violent conflict. Two studies examined Jewish Israelis’ self-reported IER, empathy, liberal attitudes, and support for humanitarian aid to Palestinians in Gaza. Study 1 found that, unlike reappraisal, Jewish Israelis’ ability to explore emotions promoted concern for others’ emotions, which in turn predicted support for humanitarian aid. Study 2 replicated this mediation model, additionally confirming that liberal attitudes moderated the relation between IER and support for humanitarian aid. Thus, IER linked more strongly with humanitarian support when commitment to liberal egalitarian beliefs was high. These preliminary results hold important theoretical and practical implications regarding the potential to empathise with outgroup members in intractable conflicts.
Integration of ontologies begins with establishing mappings between their concept entries. We map categories from the largest manually-built ontology, Cyc, onto Wikipedia articles describing corresponding concepts. Our method draws both on Wikipedia’s rich but chaotic hyperlink structure and Cyc’s carefully defined taxonomic and common-sense knowledge. On 9,333 manual alignments by one person, we achieve an F-measure of 90%; on 100 alignments by six human subjects the average agreement of the method with the subject is close to their agreement with each other. We cover 62.8% of Cyc categories relating to common-sense knowledge and discuss what further information might be added to Cyc given this substantial new alignment.
This paper contrasts the scholastic realisms of David Armstrong and Charles Peirce. It is argued that the so-called ‘problem of universals’ is not a problem in pure ontology (concerning whether universals exist) as Armstrong construes it to be. Rather, it extends to issues concerning which predicates should be applied where, issues which Armstrong sets aside under the label of ‘semantics’, and which from a Peircean perspective encompass even the fundamentals of scientific methodology. It is argued that Peirce's scholastic realism not only presents a more nuanced ontology (distinguishing the existent from the real) but also provides more of a sense of why realism should be a position worth fighting for.
In order to achieve genuine web intelligence, building some kind of large general machine-readable conceptual scheme (i.e. ontology) seems inescapable. Yet the past 20 years have shown that manual ontology-building is not practicable. The recent explosion of free user-supplied knowledge on the Web has led to great strides in automatic ontology building, but quality-control is still a major issue. Ideally one should automatically build onto an already intelligent base. We suggest that the long-running Cyc project is able to assist here. We describe methods used to add 35K new concepts mined from Wikipedia to collections in ResearchCyc entirely automatically. Evaluation with 22 human subjects shows high precision both for the new concepts’ categorization, and their assignment as individuals or collections. Most importantly we show how Cyc itself can be leveraged for ontological quality control by ‘feeding’ it assertions one by one, enabling it to reject those that contradict its other knowledge.