Cerebellar Purkinje cells generate two distinct types of spikes, complex and simple spikes, both of which have conventionally been considered highly irregular, suggestive of certain types of stochastic processes as underlying mechanisms. Interestingly, however, the interspike interval structures of complex spikes have not been carefully studied so far. We showed in a previous study that simple spike trains are actually composed of regular patterns and single interspike intervals, a mixture that could not be explained by a simple rate-modulated Poisson process. In the present study, we systematically investigated the interspike interval structures of separated complex and simple spike trains recorded in anaesthetized rats, and derived an appropriate stochastic model. We found that: (i) complex spike trains do not exhibit any serial correlations, so they can effectively be generated by a renewal process; (ii) the distribution of intervals between complex spikes exhibits two narrow bands, possibly caused by two oscillatory bands (0.5–1 and 4–8 Hz) in the input to Purkinje cells; and (iii) the regularity of regular patterns and single interspike intervals in simple spike trains can be represented by gamma processes whose orders are themselves drawn from gamma distributions, suggesting that multiple sources modulate the regularity of simple spike trains.
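The doubly stochastic model the abstract describes can be sketched in a few lines: interspike intervals follow a gamma renewal process whose order (regularity) is itself a gamma-distributed random variable. This is a minimal illustrative simulation only; the rates, shapes, and scales below are made-up parameters, not the values fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spike_train(rate_hz, order_shape, order_scale, n_spikes, rng):
    """Simulate interspike intervals (ISIs) from a gamma renewal process
    whose order is itself drawn from a gamma distribution.

    All parameter values are illustrative, not fitted values.
    """
    # Draw the gamma order (regularity parameter) from a gamma distribution;
    # clip away from zero so the ISI gamma shape stays valid.
    order = max(rng.gamma(shape=order_shape, scale=order_scale), 1e-6)
    # ISIs of a gamma process of that order, scaled so the mean ISI is 1/rate_hz
    # regardless of the order (the order only controls regularity).
    isis = rng.gamma(shape=order, scale=1.0 / (order * rate_hz), size=n_spikes)
    return isis

isis = simulate_spike_train(rate_hz=40.0, order_shape=4.0, order_scale=1.0,
                            n_spikes=10_000, rng=rng)
# Coefficient of variation: for a gamma process of order k, CV = 1/sqrt(k),
# so higher order means more regular (Poisson corresponds to order 1).
cv = isis.std() / isis.mean()
print(round(isis.mean(), 4), round(cv, 2))
```

Repeating the simulation with a fresh order draw for each spike-train segment reproduces the key feature of the model: the mean rate stays fixed while the regularity varies from segment to segment.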
The present research was designed to investigate the absolute and relative levels of ethical convictions of executive search consultants, or "headhunters", with regard to their search practices. Executive search consultants were defined as trained specialists who helped client organizations identify and evaluate the suitability of job candidates for top, senior, and middle-level management and executive positions. Despite frequent reports of unethical search practices in the media, results based on a sample of 184 headhunters and non-headhunter executives showed that headhunters were inclined to adhere stringently to a selected set of ethical values, both in absolute terms and in comparison with the expectations of non-headhunter executives. The differences had implications not only for the integrity and continued existence of the headhunting profession, but also for the ethical development of new executive search consultants. Future research directions were suggested.
Recent scandals allegedly linked to CEO compensation have brought executive compensation and perquisites to the forefront of debate about constraining executive compensation and reforming the associated corporate governance structure. We briefly describe the structure of executive compensation, and the agency theory framework that has commonly been used to conceptualize executives acting on behalf of shareholders. We detail some criticisms of executive compensation and associated ethical issues, and then discuss what previous research suggests are likely intended and unintended consequences of some widely proposed executive compensation reforms. We explicitly discuss the following recommendations for reform: require greater independence of compensation committees, require executives to hold equity in the corporation, require greater disclosure of executive compensation, increase institutional investor involvement in corporate governance (including executive compensation), and require firms to expense stock options on their income statements. We provide a brief summary discussion of ethical issues related to executive compensation, and describe possible future research.
In this paper, I aim to identify Peirce's great contribution to logical diagrams and its limit. Peirce is the first person who believed that the same logical status can be given to diagrams as to symbolic systems. Even though this belief led him to invent his own graphical system, Existential Graphs, the success or failure of this system does not determine the value of Peirce's general insights about logical diagrams. In order to make this point clear, I will show that Peirce's revolutionary ideas about diagrams not only overcame some important defects of Venn diagrams but opened a new horizon for logical diagrams. Finally, I will point out where Peirce's new horizon for logical diagrams stopped and will claim that this limit is mainly responsible for the discrepancy between Peirce's and others' estimates of his contribution to logical diagrams.
In this study, we examined students' attitudes toward cheating and whether they would report instances of cheating they witnessed. Data were collected from three educational institutions in Singapore. A total of 518 students participated in the study. Findings suggest that students perceived cheating behaviors involving exam-related situations to be serious, whereas plagiarism was rated as less serious. Cheating in the form of not contributing one's fair share in a group project was also perceived as a serious form of academic misconduct, although a majority of the students admitted having engaged in such behavior. With regard to the prevalence of academic cheating, our findings suggest that students are morally ambivalent about academic cheating and are rather tolerant of dishonesty among their peers. On the issue of whether cheating behaviors should be reported, our findings revealed that a majority of students chose to take the expedient measure of ignoring the problem rather than to blow the whistle on their peers. Implications of our findings are discussed.
This paper reconstructs the Peircean interpretation of Kant's doctrine on the syntheticity of mathematics. Peirce correctly locates Kant's distinction in two different sources: Kant's lack of access to polyadic logic and, more interestingly, Kant's insight into the role of ingenious experiments required in theorem-proving. In this second respect, Kant's analytic/synthetic distinction is identical with the distinction Peirce discovered among types of mathematical reasoning. I contrast this Peircean theory with two other prominent views on Kant's syntheticity, i.e. the Russellian and the Beckian views, and show how Peirce's interpretation of Kant solves the dilemma that each of these two views faces. I also show that Hintikka's criterion for Kant's synthetic judgments, i.e. a new individual introduced by the existential instantiation rule, does not capture the most important characteristic of Peirce's theorematic reasoning, i.e. the process of choosing a correct individual.
Parallelism has been drawn between modes of representation and problem-solving processes: Diagrams are more useful for brainstorming, while symbolic representation is more welcome in a formal proof. The paper gets to the root of this clear-cut dualistic picture and argues that the strength of diagrammatic reasoning in the brainstorming process does not have to be abandoned at the stage of proof, but instead should be appreciated and could be preserved in mathematical proofs.
The evolution of Euler diagrams is examined from Euler's original system through the modifications made by Venn and Peirce. It is shown that these modifications were motivated by an attempt to increase the expressivity of the diagrams, but that a side effect of these modifications was a loss of the visual clarity of Euler's original system. Euler's original system is reconstructed from a modern, logical point of view. Formal semantics and rules of inference are provided for this reconstruction of Euler's system, and basic logical properties are proved.
Organizational citizenship behaviors (OCBs) are essential for effective organizational functioning. Decisions by employees to engage in these important discretionary behaviors are based on how they make sense of the organizational context. Using fairness heuristic theory, we tested two important OCB predictors: manager trustworthiness and interactional justice. In the process, we control for the effects of dispositional factors (propensity to trust) and for system-based organizational fairness (procedural and distributive justice). Results, based on surveys collected from 120 employee–supervisor dyads, indicate that manager trustworthiness explains variance in OCBs over and above the variance accounted for by interactional fairness. Implications for theory and practice are discussed.
Many transnational corporations and international organizations have embraced corporate social responsibility (CSR) to address criticisms of working and environmental conditions at subcontractors' factories. While CSR 'codes of conduct' are easy to draft, supplier compliance has been elusive. Even third-party monitoring has proven an incomplete solution. This article proposes that an alteration in the supply chain's governance, from an arm's-length market model to a collaborative partnership, often will be necessary to effectuate CSR. The market model forces contractors to focus on price and delivery as they compete for the lead firm's business, rendering CSR observance secondary, at best. A collaborative partnership where the lead firm gives select suppliers secure product orders and other benefits removes disincentives and adds incentives for CSR compliance. In time, the suppliers' CSR habit should shift their business philosophy toward pursuing CSR as an end in itself, regardless of buyer incentives and monitoring. This article examines these hypotheses in the context of the athletic footwear sector with Nike, Inc. and its suppliers as the specific case study. The data collected and conclusions reached offer strategies for advancing CSR beyond the superficial and often ineffectual 'code of conduct' stage.
Based on an integrated theoretical framework, this study analyzes user acceptance behavior toward socially interactive robots, focusing on the variables that influence the users' attitudes and intentions to adopt robots. Individuals' responses to questions about attitude and intention to use robots were collected and analyzed according to different factors modified from a variety of theories. The results of the proposed model explain that social presence is key to the behavioral intention to accept social robots. The proposed model shows the significant roles of perceived adaptivity and sociability, both of which affect attitude as well as influence perceived usefulness and perceived enjoyment, respectively. These factors can be key features of users' expectations of social robots, which can give practical implications for designing and developing meaningful social interaction between robots and humans. The new set of variables is specific to social robots, acting as factors that enhance attitudes and behavioral intentions in human-robot interactions. Keywords: Robot acceptance model; Socially interactive robots; Social robots; Social presence.
This paper proposes that Levinas's philosophy of alterity and infinitude - based upon the ethical relation between Self and Other - is both profound and limited in its ability to account for social practice. Instead of simply accepting the common criticism of Levinas, however, that he places an intolerable ethical burden of infinitude upon human relations, this paper aims to move beyond this impasse by placing Levinas's metaphysics within a frame that privileges the dynamic between the Self and the Other as a socially oriented, participative practice of teaching and learning. It is suggested that Etienne Wenger's work on the emergence of identity as a constant negotiation between the Others and the Self provides a conceptual framework for how business ethics may be owned, negotiated and learned within organizational communities without sacrificing the horizon of infinitude bestowed upon us by Levinas's ethical philosophy. Finally, the practical implications of such a comparative approach for the teaching of alterity in business ethics are discussed.
Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2–4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights.
The importance of the notion of common knowledge in sustaining cooperative outcomes in strategic situations is well appreciated. However, the systematic analysis of the extent to which small departures from common knowledge affect equilibrium in games has only recently been attempted. We review the main themes in this literature, in particular, the notion of common p-belief. We outline both the analytical issues raised, and the potential applicability of such ideas to game theory, computer science and the philosophy of language.
Some of the important conceptual debates between different approaches to class analysis can be interpreted as reflecting different ways of linking temporality to class structure. In particular, processual concepts of class can be viewed as linking class to the past whereas structural concepts link class to the future. This contrast in the temporality of class concepts in turn is grounded in distinct intuitions about why class is explanatory of social conflict and social change. Processual approaches to class see its explanatory power as deriving from the way meanings and identities are linked to class via a history of experiences; structural approaches, in contrast, emphasize the linkage between class and perceived interests via the objective possibilities facing people in different class locations. This paper tries to integrate these two temporalities by exploring the ways in which trajectories of class experience intersect structures of objective possibility in shaping different dimensions of class consciousness.
In this paper, I criticize naturalized epistemology. To this end, I critically examine several versions of naturalistic epistemology (Quine, Kornblith, and Plantinga). While Quine's epistemology eschews any kind of normativity not invoked in science, Kornblith's and Plantinga's views attempt to explain normativity in the light of descriptivity. I provide an argument against them. The upshot of my argument is that since we are self-conscious beings, we have the reflective ability to see what we ought to believe. In other words, the fact that we are self-conscious beings requires us to find reasons for our beliefs. I argue that naturalistic epistemology cannot capture that idea, since it is only concerned with a third-person, impersonal approach. It simply shifts our thinking about justification from a subjective or first-person perspective to an objective or third-person perspective. Therefore, naturalistic epistemology, even in a weak version, is untenable in that it simply ignores human consciousness and its role in the justification of beliefs.
The construct of moral intensity, proposed by Jones (1991), was used to predict the extent to which individuals were able to recognize moral issues. We tested for the effects of the six dimensions of moral intensity: social consensus, proximity, concentration of effect, probability of effect, temporal immediacy and magnitude of consequences. A scenario-based study, conducted among business individuals in Singapore, revealed that social consensus and magnitude of consequences influenced the recognition of moral issues. The study provided evidence for the effects of temporal immediacy. There was marginal support for the impact of proximity and probability of effect but no evidence that concentration of effect influenced recognition of moral issues. The paper concludes with a discussion of the implications of these results for researchers and organisational practitioners.
This article offers a comparative study of three thinkers from almost as many intellectual and cultural traditions: Avicenna, Maimonides, and Gersonides, and discusses the extent of the knowledge of particulars which each one ascribed to God. Avicenna de-reified Aristotle's abstract and isolated Prime Mover and argued that God can know particulars but limited these to universals. Maimonides disanalogized divine from human knowledge, arguing that the epistemic mode predicated of mankind cannot be equally predicated of God, and that God knows particulars qua particulars even as his Knowing encompasses all of eternity in a single act of knowledge. Attempting an intermediate path between the former's highly discursive reasoning and the latter's more scriptural approach, Gersonides postulated that God can know particulars qua particulars—as is befitting a Perfect Being—but this He does 'mediately' as it were, via the emanative ordering comprising the separate intelligences and culminating in the Active Intellect.
This paper presents the critical role of corporate responsibility in the sustainability of health care programs in lower income communities mostly located in the rural areas. The Leaders for Health Program (LHP)—a tri-partite partnership between the Philippine Department of Health, the Health Unit of the Ateneo de Manila University Graduate School of Business, and Pfizer Philippines, Inc.—is an innovative approach focusing on health promotion and education as the cornerstone for community development. LHP adopts a systemic and comprehensive approach that takes into consideration all the major stakeholders in health, especially in rural communities. This paper aims to support the viability of education as the main catalyst for community empowerment and self-sufficiency.
Logicians have strongly preferred first-order natural deductive systems over Peirce's Beta Graphs even though both are equivalent to each other. One of the main reasons for this preference, I claim, is that inference rules for Beta Graphs are hard to understand, and, therefore, hard to apply for deductions. This paper reformulates the Beta rules to show more fine-grained symmetries built around visual features of the Beta system, which makes the rules more natural and easier to use and understand. Noting that the rules of a natural deductive system are natural in a different sense, this case study shows that the naturalness and the intuitiveness of rules depend on the type of representation system to which they belong. In a diagrammatic system, when visual features are discovered and fully used, we have a more efficacious deductive system. I will also show that this project not only helps us to apply these rules more easily but also to understand the validity of the system at a more intuitive level.
This paper considers the ethical implications of applying three major ethical theories to the memory structure of an artificial companion that might have different embodiments such as a physical robot or a graphical character on a hand-held device. We start by proposing an ethical memory model and then make use of an action-centric framework to evaluate its ethical implications. The case that we discuss is that of digital artefacts that autonomously record and store user data, where these data are used as a resource for future interaction with users.
Jaegwon Kim argues that if mental properties are irreducible with respect to physical properties then mental properties are epiphenomenal. I believe this conditional is false and argue that mental properties, along with their physical counterparts, may overdetermine their effects. Kim contends, however, that embracing overdetermination in the mental case, due to supervenience, renders the attribution of overdetermination vacuous. This way of blocking the overdetermination option, however, makes the attribution of mental epiphenomenalism equally vacuous. Furthermore, according to Kim's own logic, physical properties, and not mental properties, may be in danger of losing their causal relevance.
This essay analyzes how the zhengming 正名 theory of Confucius is linked to the problem of "observances of form" in light of the methodology of Confucian aesthetics. This essay argues that the "name-shape" combination in the zhengming paradigm is ultimately connected with the "name-role" combination. The "name-shape" paradigm continuously maintains and strengthens the "name-role" paradigm. However, the "name-shape" paradigm itself ultimately becomes more meaningful than the "name-role" paradigm. This is because the aesthetic structure that appears peculiar in the Analects constitutes the "name-shape" paradigm. In this aesthetic structure, what is ultimately important is "form."
Bringing culture and personality into combination with emotions requires bringing three different theories together. In this paper, we discuss an approach for combining Hofstede's cultural dimensions, the Big Five personality parameters, and the PSI theory of emotions to come up with an emergent affective character model.
This study assessed the knowledge and perception of human biological materials (HBM) and biorepositories among three study groups in South Korea. The relationship between the knowledge and the perception among different groups was also examined by using factor and regression analyses. In a self-reporting survey of 440 respondents, the expert group was found to be more knowledgeable and to hold more positive perceptions than the other groups. Four factors emerged: Sale and Consent, Flexible Use, Self-Confidence, and perception of the restrictions of the Korean Bioethics and Biosafety Act. The results indicate that those who were well aware of the existence of biobanks were more positively inclined toward the Sale and Consent perception. Given the need for high quality HBMs and the use of appropriate sampling procedures for every aspect of the collection and use process, the biorepository community should pay attention to ethical, legal, and policy issues.
For the most part, the primary driver for international businesses in establishing operations in other countries is the reduction of overall operating costs. Host countries, especially developing nations, welcome multinational corporations (MNCs) because of the perceived economic benefits that international businesses can bring to their local communities. Surprisingly, one of the most understudied, under-analyzed, and sometimes even completely neglected factors when international businesses consider setting up shop in other countries is the local culture of their chosen destination country. This paper substantiates the thesis that international businesses should adapt their corporate practices to the local cultures in which they operate to achieve effective and superior business performance. The paper goes further in identifying corporate practices that were adapted or revised by international businesses to respond to the culture of local communities in the Philippines.
My project in this paper is to provide a plausible idea of Christ's suffering and death in terms of two theories of the human person. One is dualism. Dualism is the view that a human person is composed of two substances, that is, a soul and a body, and he (strictly speaking) is identical with the soul. On the other hand, physicalism is the view that a human person is numerically identical with his body. I will argue that dualism is not successful in explaining Christ's passion for several reasons. Rather, physicalism, as I shall argue, provides a better explanation of how Christ's physical suffering and death are real just like everyone else's, so it is philosophically and theologically more plausible than dualism.
German idealist philosopher J. G. Fichte (1762‐1814), as an heir to Kant, sought the uniformity of reason in his own philosophical system, the Wissenschaftslehre. However, the political implications of his philosophy have dual aspects. The first is his own political theory, presented in accordance with his philosophical principles. The second is a set of political influences arising from his practical positions alongside his philosophy. By and large, it is through this second aspect that Fichte's nationalistic perspectives have been interpreted. As a result, the political implications of his philosophy have frequently been reduced to those of a prophet of ethnic German nationalism and Nazism. But we need to distinguish his systematic theory from the influences that resulted from his practical attitudes, because he proposed an alternative idea of nationalism built on the basis of his philosophical principles. With reference to 'nationality', what Fichte had in mind was the activeness of man and the universality of the structure which operates while he or she is acting. For Fichte, the activeness of consciousness and life are one and the same. With this presupposition, 'nationality' is conceptualized as the phase of commonness and reciprocity that comes into being among self-forming conscious beings. Therefore his idea of 'nationality' cannot be grasped entirely in a primordial dimension, as in ethnic nationalism. The original and fundamental base of nationality is man's acting power, working constantly toward the perfection of man himself.
Are values and social priorities universal, or do they vary across geography, culture, and time? This question is very relevant to Asia's emerging economies that are increasingly looking at Western models for answers to their own outmoded health care systems that are in dire need of reform. But is it safe for them to do so without sufficient regard to their own social, political, and philosophical moorings? This article argues that historical and cultural legacies influence prevailing social values with regard to health care financing and resource allocation, and that the Confucian dimension provides a helpful entry point for a deeper understanding of ongoing health care reforms in East Asia – as exemplified by the unique case of Singapore.
In his so-called argument from consciousness (AC), J. P. Moreland argues that the phenomenon of consciousness furnishes us with evidence for the existence of God. In defending AC, however, Moreland makes claims that generate an undesirable tension. This tension can be posed as a dilemma based on the contingency of the correlation between mental and physical states. The correlation of mental and physical states is either contingent or necessary. If the correlation is contingent then epiphenomenalism is true. If the correlation is necessary then a theistic explanation for the correlation is forfeit. Both are unwelcome results for AC.
How do we know the degree of imagination involved in knowing a reality? This is essentially an epistemological question. This essay discusses first the role of imagination in Polanyi’s epistemology since it is used here as the basis of integrative reality. The essay then discusses the degree of imagination involved in three types of integrative reality that are found respectively in technology, science, and humanities. It concludes with a discussion on the role of imagination in education.
In an article in the Journal of Philosophical Logic in 1996, "Towards a Model Theory of Venn Diagrams" (Vol. 25, No. 5, pp. 463–482), Hammer and Danner proved the full completeness of Shin's formal system for reasoning with Venn Diagrams. Their proof is eight pages long. This note gives a brief five-line proof of this same result, using connections between diagrammatic and sentential representations.
In the face of mounting criticism against advance directives, we describe how a novel, computer-based decision aid addresses some of these important concerns. This decision aid, Making Your Wishes Known: Planning Your Medical Future, translates an individual's values and goals into a meaningful advance directive that explicitly reflects their healthcare wishes and outlines a plan for how they wish to be treated. It does this by (1) educating users about advance care planning; (2) helping individuals identify, clarify, and prioritize factors that influence their decision-making about future medical conditions; (3) explaining common end-of-life medical conditions and life-sustaining treatment; (4) helping users articulate a coherent set of wishes with regard to advance care planning—in the form of an advance directive readily interpretable by physicians; and (5) helping individuals both choose a spokesperson, and prepare to engage family, friends, and health care providers in discussions about advance care planning.
It has been argued that, on Kantian grounds, pedophiles, rapists and murderers are morally obligated to take their own lives prior to committing a violent action that will end their moral agency. That is, to avoid destroying the agent's moral life by performing a morally suicidal action, the agent, while he still is a moral agent, should end his body's life. Although the cases of dementia and the morally reprehensible are vastly different, this Kantian interpretation might be useful in the debate on the permissibility of suicide for those facing dementia's effects. If moral agents have a duty to act as moral agents, then those who will lose their moral identity as moral agents have an obligation to themselves to end their physical lives prior to losing their dignity as persons.
The fact that the notion of 'practice' has achieved an ever-increasing relevance in the most various fields of knowledge must not overshadow that it can be interpreted in so many different ways as to orient fairly different historiographical paradigms and philosophical conceptions. Starting with the two main issues of Hadot's criticism of Foucault (the lack of a distinction between joy and pleasure and the fact that his account does not underscore that the individual Self is ultimately transcended by universal Reason), I have tried to show how the two scholars' philosophical and historiographical approaches entail a different notion of 'practice'. According to Hadot, the performativity of a practice (or spiritual exercise) is intimately tied to a universal which transcends the individual self, whereas Foucault maintains that it does not require the appeal to any universal, being exclusively grounded on the modes of exertion of the practices which constitute the individual Self. On this approach, pleasure is a fundamental notion for historicizing the different ways in which the ethical subject structures itself.
This study was designed to investigate the factors affecting ethical practices of public relations professionals in public relations firms. In particular, the following organizational ethics factors were examined: (1) presence of an ethics code, (2) top management support for ethical practice, (3) ethical climate, and (4) perception of the association between career success and ethical practice. Analysis revealed that the presence of an ethics code, along with top management support and a non-egoistic ethical climate within public relations firms, significantly influenced public relations professionals' ethical practices. Eyun-Jung Ki, Junghyuk Lee, and Hong-Lim Choi, Asian Journal of Business Ethics, DOI 10.1007/s13520-011-0013-1.
Song, Hongbing 宋洪兵, New Studies of Han Feizi’s Political Thought 韓非子政治思想再硏究. Soon-ja Yang, Dao (2012), pp. 1–4, DOI 10.1007/s11712-012-9265-2.
In his Meditations, Rene Descartes asks, "what am I?" His initial answer is "a man." But he soon discards it: "But what is a man? Shall I say 'a rational animal'? No: for then I should inquire what an animal is, what rationality is, and in this way one question would lead down the slope to harder ones." Instead of understanding what a man is, Descartes shifts to two new questions: "What is Mind?" and "What is Body?" These questions develop (...) into Descartes's main philosophical preoccupation: the Mind-Body distinction. How can Mind and Body be independent entities, yet joined--essentially so--within a single human being? If Mind and Body are really distinct, are human beings merely a "construction"? On the other hand, if we respect the integrity of humans, are Mind and Body merely aspects of a human being and not subjects in and of themselves? For centuries, philosophers have considered this classic philosophical puzzle. Now, in this compact, engaging, and long-awaited work UCLA philosopher Joseph Almog closely decodes the French philosopher's argument for distinguishing between the human mind and body while maintaining simultaneously their essential integration in a human being. He argues that Descartes constructed a solution whereby the trio of Human Mind, Body, and Being are essentially interdependent yet remain each a genuine individual subject. Almog's reading not only steers away from the most popular interpretations of Descartes, but also represents a scholar coming to grips directly with Descartes himself. In doing so, Almog creates a work that Cartesian scholars will value, and that will also prove indispensable to philosophers of language, ontology, and the metaphysics of mind. (shrink)
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...) surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines. This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors. 
Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring at the Singularity”: “Computing speed doubles every two subjective years of work.. (shrink)
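The limit point in the speed-explosion argument can be made concrete with a small calculation (a sketch under the argument's stated assumptions, not part of the original texts; the function name `wall_clock_times` is ours): if each speed doubling takes two subjective years of design work, and the nth generation of designers runs at 2^n times the base speed, then the nth doubling takes 2/2^n years of wall-clock time, and the cumulative wall-clock time converges to the finite limit of 4 years rather than growing without bound.

```python
def wall_clock_times(generations):
    """Cumulative wall-clock years elapsed at each successive speed doubling,
    assuming (i) each doubling takes 2 subjective years of design work and
    (ii) generation n designers run at 2**n times the base speed."""
    times = []
    elapsed = 0.0
    for n in range(generations):
        # 2 subjective years, compressed by the current speedup factor 2**n
        elapsed += 2.0 / (2 ** n)
        times.append(elapsed)
    return times

# The geometric series 2 + 1 + 0.5 + ... approaches a limit of 4 years:
# all subsequent doublings together occupy a bounded stretch of real time.
print(wall_clock_times(20))
```

This is just the familiar geometric-series point: under these assumptions the "singularity" is a limit point in wall-clock time, which is what the quoted Yudkowsky passage is driving at.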
It is an unfortunate fact of academic life that there is a sharp divide between science and philosophy, with scientists often being openly dismissive of philosophy, and philosophers being equally contemptuous of the naïveté of scientists when it comes to the philosophical underpinnings of their own discipline. In this paper I explore the possibility of reducing the distance between the two sides by introducing science students to some interesting philosophical aspects of research in evolutionary biology, using biological theories of (...) the origin of religion as an example. I show that philosophy is both a discipline in its own right and one that has interesting implications for the understanding and practice of science. While the goal is certainly not to turn science students into philosophers, the idea is that both disciplines cannot but benefit from a mutual dialogue that starts as soon as possible, in the classroom. (shrink)
If you start taking courses in contemporary cognitive science, you will soon encounter a particular picture of the human mind. This picture says that the mind is a lot like a computer. Specifically, the mind is made up of certain states and certain processes. These states and processes interact, in accordance with certain general rules, to generate specific behaviors. If you want to know how those states and processes got there in the first place, the only answer is that they (...) arose through the interaction of other states and processes, which arose from others... until, ultimately, the chain goes back to factors in our genes and our environment. Hence, one can explain human behavior just by positing a collection of mental states and psychological processes and discussing the ways in which these states and processes interact. This picture of the mind sometimes leaves people feeling deeply uncomfortable. They find themselves thinking something like: 'If the mind actually does work like that, it seems like we could never truly be morally responsible for anything we did. After all, we would never be free to choose any behavior other than the one we actually performed. Our behaviors would just follow inevitably from certain facts about the configuration of the states and processes within us.' Many philosophers think that this sort of discomfort is fundamentally confused or wrongheaded. They think that the confusion here can be cleared up just by saying something like: 'Wait! It doesn't make any sense to say that the interaction of these states and processes is preventing you from controlling your own life. The thing you are forgetting is that the interaction of these states and processes – this whole complex system described by cognitive science – is simply you. So when you learn that these states and processes control your behavior, all you are learning is that you are controlling your behavior. 
There is no reason at all to see these discoveries as a threat to your freedom or responsibility.' Philosophers may regard this argument as a powerful one, perhaps even irrefutable.. (shrink)
The Phenomenological Mind, by Shaun Gallagher and Dan Zahavi, is part of a recent initiative to show that phenomenology, classically conceived as the tradition inaugurated by Edmund Husserl and not as mere introspection, contributes something important to cognitive science. (For other examples, see “References” below.) Phenomenology, of course, has been a part of cognitive science for a long time. It implicitly informs the works of Andy Clark (e.g. 1997) and John Haugeland (e.g. 1998), and Hubert Dreyfus explicitly uses it (e.g. (...) 1992). But where the former use phenomenology in the background as broad context and Dreyfus uses it primarily (though not exclusively) as a critique of conventional AI, Gallagher and Zahavi wish to indicate a positive and constructive place for it within cognitive science. They do not recommend that we simply accept pronouncements of thinkers like Husserl, Heidegger, Sartre and Merleau‐Ponty and apply them to questions of cognition, but that we use revised forms of phenomenology to illuminate dimensions of cognitive experience that are missing in current research. The book is presented as an “introduction to philosophy of mind and cognitive science” written from a phenomenological perspective. It seeks to justify the use of phenomenology in cognitive science by showing what kinds of questions it asks and answers, the variety of uses to which it has recently been put and the fruitfulness of some of its findings. The catalog of topics, for the most part, matches other introductions to the philosophy of mind, such as questions of method, consciousness, perception, intentionality, embodiment, action, agency and other minds. One issue presented here that is not generally dealt with in existing philosophy of mind and cognitive science texts is temporality, a mainstay of the continental tradition. 
After an introductory chapter that places phenomenology in the context of other approaches, the book lays out the main tenets of phenomenological method. Here, one encounters expected components of phenomenology: the epoché (described below), phenomenological reduction, eidetic variation, and so on. This traditional fare is soon followed by some potential surprises, namely, attempts to “naturalize” phenomenology, a few attempts to formalize it, and the emergence of ‘neurophenomenology’. Each of these is a bit surprising because Husserl was a vocal critic of naturalism, seeing transcendental phenomenology as an alternative to the empirical study of consciousness. He was also skeptical about the possibilities of mathematizing phenomenology. Gallagher and Zahavi acknowledge these points, but since they are not repeating history or undertaking exegesis, strict adherence to canonical phenomenology is not required. Naturalizing phenomenology means recognizing that “the phenomena it studies are part of nature and are therefore also open to empirical investigation” (p.. (shrink)
phenomena that are hallmarks of what it is to be human [1,2,4,26]. There is now a widespread and industrious scientific community, whose aim is to understand the mechanisms underlying these phenomena [7,9,10,27–32]. The underlying worry is that those things that once seemed to be forever beyond the reach of science might soon succumb to it: neuroscience will lead us to see the ‘universe within’ as just part and parcel of the (...) Whether or not the universe is deterministic, many people think that freedom can yet be salvaged if the universe is indeterministic, for they favor a Libertarian account which posits an agent as an uncaused cause [17,18]. In that case, trouble arises if the universe is deterministic. (shrink)
Benjamin Libet's empirical challenge to free will has received a great deal of attention and criticism. A standard line of response has emerged that many take to be decisive against Libet's challenge. In the first part of this paper, I will argue that this standard response fails to put the challenge to rest. It fails, in particular, to address a recent follow-up experiment that raises a similar worry about free will (Soon, Brass, Heinze, & Haynes, 2008). In the second part, (...) however, I will argue that we can altogether avoid Libet-style challenges if we adopt a traditional compatibilist account of free will. In the final section, I will briefly explain why there is good and independent reason to think about free will in this way. (shrink)
Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of “blueprints” for the construction of organisms. Likewise, cells are often characterized as “factories” and organisms themselves become analogous to machines. Accordingly, when the human genome project was initially announced, the promise was that we would soon know how a human being is made, just as we know how to make airplanes and buildings. Importantly, (...) modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as “irreducible complexity” and on flawed analogies between living cells and mechanical factories. However, the living organism = machine analogy was criticized already by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do. In this article we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public. (shrink)
This paper explores the trade-off between cognitive effort and cognitive effects during immediate metaphor comprehension. We specifically evaluate the fundamental claim of relevance theory that metaphor understanding, like all utterance interpretation, is constrained by the presumption of optimal relevance (Sperber and Wilson, 1995, p. 270): the ostensive stimulus is relevant enough for it to be worth the addressee's effort to process it, and the ostensive stimulus is the most relevant one compatible with the communicator's abilities and preferences. One important implication (...) of optimal relevance is that listeners follow a path of least effort and stop processing at the first interpretation that satisfies their expectation of relevance. They do this by trying to minimize cognitive effort while maximizing cognitive effects. Some relevance theory scholars suggest that metaphors should require additional cognitive effort to be understood, and that in return they yield more cognitive effects than does literal speech. Others claim that metaphors may be understood quickly, as soon as people infer enough effects for the speaker's utterance to meet their expectation of optimal relevance. Our analysis of the experimental evidence suggests that there is no systematic relationship between cognitive effort and cognitive effects in metaphor comprehension. We conclude that relevance theory need not make any general predictions about the effort needed to comprehend metaphors. Nevertheless, relevance theory is consistent with many of the findings in psycholinguistics on metaphor understanding, and can account for aspects of metaphor understanding that no other theory can explain. (shrink)
There is a lot that we don’t know. That means that there are a lot of possibilities that are, epistemically speaking, open. For instance, we don’t know whether it rained in Seattle yesterday. So, for us at least, there is an epistemic possibility where it rained in Seattle yesterday, and one where it did not. It’s tempting to give a very simple analysis of epistemic possibility: • A possibility is an epistemic possibility if we do not know that it does (...) not obtain. But this is problematic for a few reasons. One issue, one that we’ll come back to, concerns the first two words. The analysis appears to quantify over possibilities. But what are they? As we said, that will become a large issue pretty soon, so let’s set it aside for now. A more immediate problem is that it isn’t clear what it is to have de re attitudes towards possibilities, such that we know a particular possibility does or doesn’t obtain. Let’s try rephrasing our analysis so that it avoids this complication. (shrink)
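The "very simple analysis" quoted in the abstract can be written compactly in standard epistemic-logic notation (this formalization is ours, not the author's; K is the usual knowledge operator and the subscripted diamond marks epistemic rather than metaphysical possibility):

```latex
% p is epistemically possible iff we do not know that p fails to obtain:
\Diamond_{e}\, p \;\equiv\; \neg K \neg p
```

The abstract's two worries map directly onto this schema: the quantification problem concerns what the possibilities that p ranges over are, and the de re problem concerns what it is to apply K to a particular possibility.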