The raison d’être of this article is that many a spry-eyed analyst of work in intelligent computing and robotics fails to see what is essential in applications development: expressing its ultimate goal. Alternatively, they fail to state it suitably for the less-informed public eye. The author does not claim to be able to remedy this. Instead, the visionary investigation offered couples learning and computing with other related fields as part of a larger spectrum aimed at fully simulating people in their embodied image. For the first time, the social roles attributed to the technical objects produced are questioned, and this with a humorous illustration.
In the present enterprise we take a look at the meaning of Autonomy: how the word has been employed and some of the consequences of its use in the sciences of the artificial. Could and should robots really be autonomous entities? Over and beyond this, we use concepts from the philosophy of mind to spur on enquiry into the very essence of human autonomy. We believe our initiative, like Dennett's life-long research, sheds light upon the problems of robot design with respect to robots' relation with humans.
Discussion about applying scientific knowledge in robotics to build people-helpers is widespread. The issue addressed here is philosophically poignant: that of robots that are “people”. It is currently popular to speak about robots and the image of Man. Behind this lurks the dialogical mind and the questions about the significance of an artificial version of it. Without intending to defend or refute the discourse in favour of ‘recreating’ Man, a less familiar question is brought forth: “What if we were capable of creating a very convincing replica of man (constructing a robot-person)? What would the consequences of this be, and would we be satisfied with such technology?” This is a thorny topic: it questions the entire knowledge foundation upon which strong AI/Robotics is positioned. The author argues for improved monitoring of technological progress and thus favours implementing weaker techniques.
Erroneously attributing propositional attitudes (desires, beliefs...) to computational artefacts has become internationally commonplace in the public arena, especially amongst the new generation of non-initiated users. Technology for rendering machines “user-friendly” is often inspired by interpersonal human communication. This calls on designers to conceptualise a major component of human intelligence: the sense of communicability, and its logical consequences. The inherent incommunicability of machines subsequently causes a shift in design strategy. Though cataloguing components of bouts between person and machine with Speech Act Theory has been popular, I will endeavour to present the sine qua non for their insertion into a larger unit of discourse — their societal embodiment. I shall argue that the so-called “intelligence” of the artificial should be seen as a purposeful act that is socially generated, because it comes of Man, for Man. Designership will provide the forum for evolving user requirements and interface renewal.
In this article, I establish a theory-of-knowledge approach for evaluating the use of computers for educational purposes at the university. In so doing, I trace part of the history of the “enabling factor” of Artificial Intelligence in this sector, an important element that has been integrated into everyday learning environments. The result of my reflection is a dialogical structure, directly inspired by past technology-assessment research, which illustrates the conceptual advancement of researchers in the field of learning technologies. The notions covered have implications for future policy-related discourse with regard to education.
Empirical evidence of self-deception's propositional duality is not sought; philosophically relevant links between propositions proper and mind are explored instead. Speech in unison ably indicates the social grounding of such attitudinal structures. An extra-theoretical eye – with regard to cognitivism – is cast on a case of “illusory communication.” The reinforcing of lexical analysis shows Mele's approach to be in need of non-ego concepts, wherefore it lacks soundness with respect to reference.
Many have bowed before the recently acquired powers of ‘new technologies’. However, in the shift from tekhnē to tekhnologia, it seems we have lost human values. These values are communicative in nature, as technological progress has placed barriers like distance, web pages and ‘miscellaneous extras’ between individuals. Certain values, like the interpersonal pleasures of rendering service, have been lost as their domain of predilection has for many become fully commercially oriented, dominated by the cadence of profitability. Though the popular cultures of the artificial have surged forth to deliver us from the twentieth century, they have enabled some very superfluous dreaming—Man has succumbed to the Godly role of simulating himself and creating other beings. Communication is replaced by machines, services are rendered via many automated devices, procreation has entered the public sphere, robots and entertainment agents educate our youth, and mesmerising screen-integrating ‘forms of intelligence’ even think for us. As such, this so-called culture threatens the very values Man constructed in the nineteenth and twentieth centuries to guide himself into the future. But what if the phenomena mentioned simply reflect our new values? The author presents an investigation into this cultural shift and its impact on human practices with regard to the mind and the body, and evokes some pros and cons of generally accepting the ‘Culture of the Artificial’.
The standard models for mereotopological structures are Boolean subalgebras of the complete Boolean algebra of regular closed subsets of a nonempty connected regular T₀ topological space, with an additional "contact relation" C defined by xCy ⟺ x ∩ y ≠ ∅. A (possibly) more general class of models is provided by the Region Connection Calculus (RCC) of Randell et al. We show that the basic operations of the relational calculus on a "contact relation" generate at least 25 relations in any model of the RCC, and hence in any standard model of mereotopology. It follows that the expressiveness of the RCC in relational logic is much greater than the original 8 RCC base relations might suggest. We also interpret these 25 relations in the standard model of the collection of regular open sets in the two-dimensional Euclidean plane.
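To make the contact relation and the RCC base relations concrete, here is a toy sketch (not from the paper; the rectangle representation and the function name are illustrative assumptions) that classifies the RCC8 base relation holding between two axis-aligned rectangles, used as stand-ins for regular closed sets in the plane:

```python
def rcc8(a, b):
    """Classify the RCC8 base relation between two axis-aligned rectangles,
    each given as (x1, x2, y1, y2) with x1 < x2 and y1 < y2.
    The rectangles stand in for regular closed sets of the plane."""
    def closures_meet(p, q):
        return p[0] <= q[1] and q[0] <= p[1] and p[2] <= q[3] and q[2] <= p[3]

    def interiors_meet(p, q):
        return p[0] < q[1] and q[0] < p[1] and p[2] < q[3] and q[2] < p[3]

    def inside(p, q):           # p is part of q (boundaries may touch)
        return q[0] <= p[0] and p[1] <= q[1] and q[2] <= p[2] and p[3] <= q[3]

    def strictly_inside(p, q):  # p lies in the interior of q
        return q[0] < p[0] and p[1] < q[1] and q[2] < p[2] and p[3] < q[3]

    if not closures_meet(a, b):
        return "DC"    # disconnected
    if not interiors_meet(a, b):
        return "EC"    # externally connected: closures touch, interiors disjoint
    if a == b:
        return "EQ"    # identical regions
    if inside(a, b):
        return "NTPP" if strictly_inside(a, b) else "TPP"
    if inside(b, a):
        return "NTPPi" if strictly_inside(b, a) else "TPPi"
    return "PO"        # partial overlap
```

Complements, converses and compositions of these relations — the relational-calculus operations the abstract refers to — then generate further relations beyond the 8 base ones.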
In this article, we present research on the making of a collective work environment within the framework of a distance-education course. We base our theoretical and methodological standpoints on examples of dialogical discourse recorded within this CSCL system, called Symba. In fact, the results of previous research lead us to rethink our vision of the study of collaborative moments between participants in a computer-supported human learning environment that proposes several communication tools. Redefining the methodological process aimed at finding and understanding these rich learning moments is also necessary. We intend to describe “socio-technical” instances during which these collaboration phases appear. More generally speaking, our aim is to draw up new theoretical and methodological perspectives that would be reusable in CSCL environments; in view of the nature of these two perspectives, and the diversity of the domain knowledge (sociology, cognitivism, linguistics, philosophy, statistics, etc.) brought to bear in the study of the environment in question, our approach constitutes a trans-disciplinary reassessment of the uses of the communication tools proposed, and of the study thereof.
In his paper Bare Particulars, T. Sider claims that spacetime points are among the most plausible candidates for bare particulars. The aim of this paper is to shed light on Sider’s reasoning and its consequences. There are three concepts of spacetime points that allow their identification with bare particulars. One of them, Moderate structural realism, is considered the most adequate due to its appropriate approach to spacetime metric and its moderate view of mereological simples. However, it pushes the Substratum theory to dismiss primitive thisness as the only identity condition for bare particulars, but the paper argues that such an elimination is a legitimate step.
The goal of this paper is an interpretation of Aristotle's modal syllogistics, closely oriented on the text, using the resources of modern modal predicate logic. Modern predicate logic was successfully able to interpret Aristotle's assertoric syllogistics uniformly, that is, with one formula for universal premises. A corresponding uniform interpretation of modal syllogistics by means of modal predicate logic is not possible. This thesis does not imply that a uniform view is abandoned. However, it replaces the simple unity of the assertoric by the complex unity of the modal. The complexity results from the fact that, though one formula for universal premises is used as the basis, it must be moderated where the text requires. Aristotle introduces his modal syllogistics by expanding his assertoric syllogistics with an axiom that links two apodictic premises to yield a single apodictic sentence. He thus defines a regular modern modal logic. By means of the regular modal logic that is thus defined, he is able to reduce the purely apodictic syllogistics to assertoric syllogistics. However, he goes beyond this simple structure when he looks at complicated inferences. In order to be able to link not only premises of the same modality, but also premises with different modalities, he introduces a second axiom, the T-axiom, which infers from necessity to reality or, equivalently, from reality to possibility. Together, the two axioms, the axiom of regularity and the T-axiom, define a regular T-logic. It plays an important role in modern logic. In order to account for modal syllogistics adequately as a whole, another modern axiom is also required, the so-called B-axiom. It is very difficult to decide whether Aristotle had the B-axiom. The two last-named axioms are sufficient to achieve the required contextual moderation of the basic formula for universal propositions.
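In standard modern notation, the axioms just named can be sketched as follows (a textbook-style reconstruction of the schemata, not Aristotle's own formulation):

```latex
% Regularity (C): two apodictic premises yield one apodictic sentence
(\Box p \land \Box q) \rightarrow \Box (p \land q)

% T-axiom: from necessity to reality, equivalently from reality to possibility
\Box p \rightarrow p
\qquad\text{equivalently}\qquad
p \rightarrow \Diamond p

% B-axiom: from reality to the necessity of possibility
p \rightarrow \Box \Diamond p
```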
This paper presents an experimental study of common consequence effects in binary choice, willingness-to-pay (WTP) elicitation, and willingness-to-accept (WTA) elicitation. We find strong evidence in favor of the fanning out hypothesis (Machina, Econometrica 50:277–323, 1982) for both WTP and WTA. In contrast, the choice data do not show a clear pattern of violations in the absence of certainty effects. Our results underline the relevance of differences between pricing and choice tasks, and their implications for models of decision making under risk.
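As a reminder of what a common consequence test probes (the payoffs below are the classic Allais numbers, used purely for illustration and not the stimuli of this study): under expected utility, replacing an outcome that two lotteries share in common cannot reverse the preference between them.

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

u = lambda x: x ** 0.5  # an arbitrary concave utility, for illustration only

# Allais-style common consequence pair: A vs B share a 0.89 chance of $1M;
# A2 vs B2 replace that common consequence with a 0.89 chance of $0.
A  = [(1.00, 1_000_000)]
B  = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
A2 = [(0.11, 1_000_000), (0.89, 0)]
B2 = [(0.10, 5_000_000), (0.90, 0)]

# The common consequence cancels in the difference, so
# EU(A) - EU(B) == EU(A2) - EU(B2): any EU maximiser must
# rank A vs B the same way as A2 vs B2, whatever u is.
gap1 = expected_utility(A, u) - expected_utility(B, u)
gap2 = expected_utility(A2, u) - expected_utility(B2, u)
```

Systematic preference reversals across such pairs, e.g. the fanning-out pattern, are exactly what expected utility rules out.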
The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object’s “importance” for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (“common”/“important”) or a rarely named (“rare”/“unimportant”) object, track the observers’ eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object’s importance suggests an analogy to the effects of word frequency on landing positions in reading.
In “Against Arguments from Reference” (Mallon et al., 2009), Ron Mallon, Edouard Machery, Shaun Nichols, and Stephen Stich (hereafter, MMNS) argue that recent experiments concerning reference undermine various philosophical arguments that presuppose the correctness of the causal-historical theory of reference. We will argue three things in reply. First, the experiments in question—concerning Kripke’s Gödel/Schmidt example—don’t really speak to the dispute between descriptivism and the causal-historical theory; though the two theories are empirically testable, we need to look at quite different data than MMNS do to decide between them. Second, the Gödel/Schmidt example plays a different, and much smaller, role in Kripke’s argument for the causal-historical theory than MMNS assume. Finally, and relatedly, even if Kripke is wrong about the Gödel/Schmidt example—indeed, even if the causal-historical theory is not the correct theory of names for some human languages—that does not, contrary to MMNS’s claim, undermine uses of the causal-historical theory in philosophical research projects.
As pulsations and circulating currents are caused by the activity of the sun, this short survey begins with the road to recognition of solar influences on terrestrial magnetism, particularly the hypotheses of Balfour Stewart and the two treatises of Arthur Schuster on the daily variations. In meteorology and geomagnetism, photographic self-registering apparatuses were developed early at Greenwich and Kew. E. Mascart and M. Eschenhagen continued this line. With the help of his Feinregistriergerät (sensitive magnetograph), Eschenhagen could precisely record pulsations for the first time. Through short-time simultaneous observations suggested by him, the course of a terrestrial magnetic disturbance was pursued. This disturbance was identified by A. Schmidt in 1899 as a moving circulating current in the upper strata of the atmosphere.
The aim of the present investigation was to describe and classify significant ethical problems encountered by members of the staff during daily clinical work at a hospital medical department. A set of definitions was prepared for the purpose, including the definition of a 'significant ethical problem'. During a three-month period, 426 inpatients and 173 outpatients were admitted. Significant ethical problems were encountered during the management of 106 inpatients (25 per cent) and 9 outpatients (5 per cent). No significant difference was found between the frequency of ethical problems in female and male patients, but a positive correlation was noted between the number of problems and the patients' age. The problem types were classified according to a problem list. The results of this investigation suggest that greater attention must be paid to discussions of ethical problems among doctors and other categories of health personnel, and that, among others, medical students ought to be taught the analysis of ethical problems.