…develops themes from the dissertation. I argue that two models of prosopagnosia are best understood as idealizing models, and as such are subject to importantly different methodological constraints from non-idealized theories of face recognition.
In a target article that appeared in this journal, Thomas Stoffregen (2000) questions the possibility of ecological event perception research. This paper describes an experiment performed to examine the perception of the disappearance of gap-crossing affordances, a variety of event as defined by Chemero (2000). We found that subjects reliably perceive both gap-crossing affordances and the disappearance of gap-crossing affordances. Our findings provide empirical evidence in favor of understanding events as changes in the layout of affordances, shoring up event perception research in ecological psychology.
Reasonable individuals can disagree about philosophical questions. This disagreement sometimes takes the form of conflicting intuitions; the seminar room provides many examples. Experimental philosophers, who have devoted themselves to the systematic study of intuitions, have found empirical support for what anecdotes suggest. Their data often reveal that a significant minority of subjects have intuitions counter to those of the majority. A recent replication of [Knobe, 2003a] discovered three distinct subgroups of subjects with three distinct patterns of response. Only about one-third of subjects gave the expected asymmetric responses to Knobe-style probes. The other two-thirds gave symmetric responses to harm and help probes, with about half judging that side-effects were intentional in both harm and help cases and half judging that they were intentional in neither [Nichols and Ulatowski, 2007].
The problem with idealization is not just that, when idealizing, scientists ask us to suppose false things. Many people do that. No, the puzzling thing about idealizers—unlike astrologers, spodomancers, and homeopaths—is that it is worth listening to them. Supposing that populations of rabbits are infinite is useful for a variety of ecological explanations. Yet we are not up to our necks in rabbits; the puzzle is why it should be useful to suppose that we are.
Are spheres multiply realizable? A venerable tradition implies that they are. Putnam’s discussion of the peg and holes (in [Putnam, 1975]) is often taken to show that all volumetric shape properties are multiply realizable. The argument runs: (a) physics is the science of the “ultimate constituents” (Putnam’s phrase) of matter, and so (b) physics can only track the behavior of each of the simple constituents of a particular system, but (c) tediously tracking individual particles doesn’t make for a very good explanation, so (d) there must be an explanation outside of physics that does talk about shape, and that we should prefer because it abstracts away from the micro-level details of particular spheres.
Functional magnetic resonance imaging (fMRI) is widely used to support hypotheses about brain function. Many find the images produced from fMRI data to be especially compelling evidence for scientific hypotheses [McCabe and Castel, 2008]. There are many problems with all of this; I want to start with two of them, and argue that they get us closer to an under-appreciated worry about many imaging experiments.
Pains motivate us. Must they? Motivationalists about pain say yes: motivational force is an intrinsic property of pains. Many disagree. The debate is partly empirical. Find someone who is entirely unmoved by pain, and motivationalism is threatened. Fail repeatedly to find such a case, and motivationalism gains credence.
We discuss two named-entity recognition models which use characters and character n-grams either exclusively or as an important part of their data representation. The first model is a character-level HMM with minimal context information, and the second model is a maximum-entropy conditional Markov model with substantially richer context features. Our best model achieves an overall F1 of 86.07% on the English test data (92.31% on the development data). This number represents a 25% error reduction over the same model without word-internal (substring) features.
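The word-internal substring features credited with the 25% error reduction can be illustrated with a minimal sketch. This is my own toy version; the paper's actual feature templates and n-gram ranges are not specified here:

```python
def char_ngrams(word, n_min=2, n_max=4):
    """Word-internal character n-grams, with boundary markers so that
    prefixes and suffixes are distinct from word-internal substrings."""
    padded = "<" + word + ">"
    feats = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            feats.add(padded[i:i + n])
    return feats
```

A classifier can then fire one indicator feature per substring, so that suffixes like "ton>" or prefixes like "<Mc" become evidence for entity types.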
…Method by subtracting off the error along several nonprincipal eigenvectors from the current iterate of the Power Method, making use of known nonprincipal eigenvalues of the Web hyperlink matrix. Empirically, we show that using Power Extrapolation speeds up PageRank computation by 30% on a Web graph of 80 million nodes in realistic scenarios over the standard power method, in a way that is simple to understand and implement.
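The core idea, subtracting off the error along a nonprincipal eigenvector with a known eigenvalue from a Power Method iterate, can be sketched as follows. This is a toy single-eigenvalue version with illustrative names and parameters, not the paper's implementation, which uses several known eigenvalues on an 80-million-node graph:

```python
import numpy as np

def pagerank_power(P, c=0.85, tol=1e-8, extrapolate_at=10):
    """Power method for PageRank with one extrapolation step.

    P is a row-stochastic link matrix. As an illustrative assumption,
    we take the known nonprincipal eigenvalue of the damped matrix to
    be the damping factor c itself, and subtract off the error along
    that direction once.
    """
    n = P.shape[0]
    lam = c                           # assumed known nonprincipal eigenvalue
    v = np.full(n, 1.0 / n)           # uniform teleportation vector
    x = v.copy()
    for k in range(1000):
        x_new = c * P.T @ x + (1 - c) * v
        if k == extrapolate_at:
            # If x_new ~ u + lam**k * e, then (x_new - lam*x)/(1 - lam) ~ u.
            x_new = (x_new - lam * x) / (1 - lam)
            x_new /= x_new.sum()      # renormalize to a distribution
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x
```

The extrapolation step costs one vector operation, so any convergence it buys comes essentially for free relative to the matrix-vector multiplies.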
While O(n³) methods for parsing probabilistic context-free grammars (PCFGs) are well known, a tabular parsing framework for arbitrary PCFGs which allows for bottom-up, top-down, and other parsing strategies has not yet been provided. This paper presents such an algorithm, and shows its correctness and advantages over prior work. The paper finishes by bringing out the connections between the algorithm and work on hypergraphs, which permits us to extend the presented Viterbi (best parse) algorithm to an inside (total probability) algorithm.
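For intuition, here is a standard O(n³) Viterbi chart parser for a PCFG in Chomsky normal form. This is only a baseline sketch of my own: the paper's contribution is a tabular framework handling arbitrary PCFGs and multiple control strategies, which this simple bottom-up CKY does not capture:

```python
from collections import defaultdict
import math

def cky_viterbi(words, lexicon, binary_rules, start="S"):
    """Viterbi CKY for a PCFG in Chomsky normal form.

    lexicon maps (nonterminal, word) to a probability; binary_rules maps
    (parent, left_child, right_child) to a probability. Returns the log
    probability of the best parse rooted at `start` (-inf if none).
    """
    n = len(words)
    chart = defaultdict(lambda: float("-inf"))  # (i, k, label) -> best log prob
    for i, w in enumerate(words):
        for (label, word), p in lexicon.items():
            if word == w:
                chart[i, i + 1, label] = max(chart[i, i + 1, label], math.log(p))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):            # split point
                for (parent, left, right), p in binary_rules.items():
                    s = chart[i, j, left] + chart[j, k, right] + math.log(p)
                    if s > chart[i, k, parent]:
                        chart[i, k, parent] = s
    return chart[0, n, start]
```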
We present a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. Parameter search with EM produces higher quality analyses than previously exhibited by unsupervised systems, giving the best published unsupervised parsing results on the ATIS corpus. Experiments on Penn treebank sentences of comparable length show an even higher F1 of 71% on nontrivial brackets. We compare distributionally induced and actual part-of-speech tags as input data, and examine extensions to the basic model. We discuss errors made by the system, compare the system to previous models, and discuss upper bounds, lower bounds, and stability for this task.
A* PCFG parsing can dramatically reduce the time required to find the exact Viterbi parse by conservatively estimating outside Viterbi probabilities. We discuss various estimates and give efficient algorithms for computing them. On Penn treebank sentences, our most detailed estimate reduces the total number of edges processed to less than 3% of that required by exhaustive parsing, and even a simpler estimate which can be pre-computed in under a minute still reduces the work by a factor of 5. The algorithm extends the classic A* graph search procedure to a certain hypergraph associated with parsing. Unlike best-first and finite-beam methods for achieving this kind of speed-up, the A* parser is guaranteed to return the most likely parse, not just an approximation. The algorithm is also correct for a wide range of parser control strategies and maintains a worst-case cubic time bound.
We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.
This paper separates conditional parameter estimation, which consistently raises test set accuracy on statistical NLP tasks, from conditional model structures, such as the conditional Markov model used for maximum-entropy tagging, which tend to lower accuracy. Error analysis on part-of-speech tagging shows that the actual tagging errors made by the conditionally structured model derive not only from label bias, but also from other ways in which the independence assumptions of the conditional model structure are unsuited to linguistic sequences. The paper presents new word-sense disambiguation and POS tagging experiments, and integrates apparently conflicting reports from other recent work.
Unsupervised grammar induction systems commonly judge potential constituents on the basis of their effects on the likelihood of the data. Linguistic justifications of constituency, on the other hand, rely on notions such as substitutability and varying external contexts. We describe two systems for distributional grammar induction which operate on such principles, using part-of-speech tags as the contextual features. The advantages and disadvantages of these systems are examined, including precision/recall trade-offs, error analysis, and extensibility.
We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic (PCFG) structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
We present an improved method for clustering in the presence of very limited supervisory information, given as pairwise instance constraints. By allowing instance-level constraints to have space-level inductive implications, we are able to successfully incorporate constraints for a wide range of data set types. Our method greatly improves on the previously studied constrained k-means algorithm, generally requiring less than half as many constraints to achieve a given accuracy on a range of real-world data, while also being more robust when over-constrained. We additionally discuss an active learning algorithm which increases the value of constraints even further.
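For contrast with the space-level approach described above, a minimal instance-level constrained k-means in the COP-KMeans style can be sketched. The code and its names are my own illustration; the paper's method, which gives constraints space-level inductive implications, is not reproduced here:

```python
import numpy as np

def cop_kmeans(X, k, must_link, cannot_link, n_iter=50, seed=0):
    """Constrained k-means: assign each point to the nearest centroid that
    violates no pairwise constraint given the assignments made so far.
    Points left at -1 could not be assigned without a violation."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        new_labels = np.full(len(X), -1)
        for i in range(len(X)):
            # try centroids from nearest to farthest
            for c in np.argsort(((centroids - X[i]) ** 2).sum(axis=1)):
                ok = True
                for a, b in must_link:
                    j = b if i == a else a if i == b else None
                    if j is not None and new_labels[j] not in (-1, c):
                        ok = False
                for a, b in cannot_link:
                    j = b if i == a else a if i == b else None
                    if j is not None and new_labels[j] == c:
                        ok = False
                if ok:
                    new_labels[i] = c
                    break
        labels = new_labels
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels, centroids
```

Note that this greedy scheme only enforces the listed pairs; it does not generalize a constraint to nearby points, which is exactly the limitation the space-level view addresses.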
…agglomerative clustering. First, we show formally that the common heuristic agglomerative clustering algorithms – Ward’s method, single-link, complete-link, and a variant of group-average – are each equivalent to a hierarchical model-based method. This interpretation gives a theoretical explanation of the empirical behavior of these algorithms, as well as a principled approach to resolving practical issues, such as the number of clusters or the choice of method. Second, we show how a model-based viewpoint can suggest variations on these basic agglomerative algorithms. We introduce adjusted complete-link, Mahalanobis-link, and line-link as variants, and demonstrate their utility.
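The heuristic agglomerative algorithms under discussion share a simple skeleton: repeatedly merge the closest pair of clusters under some linkage. A naive sketch for single-link and complete-link follows (my own illustration, not the model-based reformulation the abstract describes; Ward's method and group-average are omitted):

```python
import math

def agglomerative(points, k, linkage="single"):
    """Merge the closest pair of clusters under the given linkage
    ("single" = min pairwise distance, "complete" = max) until k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None  # (distance, i, j)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ds = [math.dist(p, q) for p in clusters[i] for q in clusters[j]]
                d = min(ds) if linkage == "single" else max(ds)
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

The model-based interpretation replaces the linkage distance with the change in a probabilistic criterion when two clusters are merged, which is what licenses principled answers to questions like the number of clusters.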
This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
While symbolic parsers can be viewed as deduction systems, this view is less natural for probabilistic parsers. We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n³) time bound for arbitrary PCFGs, while preserving as much of the flexibility of symbolic chart parsers as allowed by the inherent ordering of probabilistic dependencies.
This paper presents empirical studies and closely corresponding theoretical models of the performance of a chart parser exhaustively parsing the Penn Treebank with the Treebank’s own CFG grammar. We show how performance is dramatically affected by rule representation and tree transformations, but little by top-down vs. bottom-up strategies. We discuss grammatical saturation, including analysis of the strongly connected components of the phrasal nonterminals in the Treebank, and model how, as sentence length increases, the effective grammar rule size increases as regions of the grammar are unlocked, yielding super-cubic observed time behavior in some configurations.
This paper discusses ensembles of simple but heterogeneous classifiers for word-sense disambiguation, examining the Stanford-CS224N system entered in the SENSEVAL-2 English lexical sample task. First-order classifiers are combined by a second-order classifier, which variously uses majority voting, weighted voting, or a maximum entropy model. While individual first-order classifiers perform comparably to middle-scoring teams’ systems, the combination achieves high performance. We discuss trade-offs and empirical performance. Finally, we present an analysis of the combination, examining how ensemble performance depends on error independence and task difficulty.
Robert Rupert is well-known as a vigorous opponent of the hypothesis of extended cognition (HEC). His Cognitive Systems and the Extended Mind is a first-rate development of his “systems-based” approach to demarcating the mind. The results are impressive. Rupert’s account brings much-needed clarity to the often-frustrating debate over HEC: much more than just an attack on HEC, he gives a compelling picture of why the debate matters.
It would be nice if our definition of ‘physical’ incorporated the distinctive content of physics. Attempts at such a definition quickly run into what’s known as Hempel’s dilemma. Briefly: when we talk about ‘physics’, we refer either to current physics or to some idealized version of physics. Current physics is likely wrong and so an unsuitable basis for a definition. ‘Ideal physics’ can’t itself be cashed out except as the science which has completed an accurate survey of the physical; appeals to it to define the physical must therefore end up trivial or circular. So defining the physical in terms of physics looks like a non-starter.
Amputation of a limb can result in the persistent hallucination that the limb is still present [Ramachandran and Hirstein, 1998]. Distressingly, these so-called ‘phantom limbs’ are often quite painful. Of a friend whose arm had been amputated due to gas gangrene, W.K. Livingston writes: I once asked him why the sense of tenseness in the hand was so frequently emphasized among his complaints. He asked me to clench my fingers over my thumb, flex my wrist, and raise the arm into a hammerlock position and hold it there. He kept me in this position as long as I could stand it. At the end of five minutes I was perspiring freely, my hand and arm felt unbearably cramped, and I quit. ‘But you can take your hand down,’ he said. (quoted in [Melzack, 1973], 53) In addition to the obvious medical issues, phantom limb pain also presents philosophical problems. Here’s a thorny one: are phantom limb pains hallucinations of pain?
In this paper, I first consider a famous objection that the standard interpretation of the Lockean account of diachronicity (i.e., one’s sense of personal identity over time) via psychological connectedness falls prey to breaks in one’s personal narrative. I argue that recent case studies show that while this critique may hold with regard to some long-term autobiographical self-knowledge (e.g., episodic memory), it carries less warrant with respect to accounts based on trait-relevant, semantic self-knowledge. The second issue I address concerns the question of diachronicity from the vantage point that there are (at least) two aspects of self—the self of psychophysical instantiation (what I term the epistemological self) and the self of first person subjectivity (what I term the ontological self; for discussion, see Klein SB, The self and its brain, Social Cognition, 30, 474–518, 2012). Each is held to be a necessary component of selfhood, and, in interaction, they appear jointly sufficient for a synchronic sense of self (Klein SB, The self and its brain, Social Cognition, 30, 474–518, 2012). As pertains to diachronicity, by contrast, I contend that while the epistemological self, by itself, is precariously situated to do the work required by a coherent theory of personal identity across time, the ontological self may be better positioned to take up the challenge.
Multiply realizable properties are those whose realizers are physically diverse. It is often argued that theories which contain them are ipso facto irreducible. These arguments assume that physical explanations are restricted to the most specific descriptions possible of physical entities. This assumption is descriptively false, and philosophically unmotivated. I argue that it is a holdover from the late positivist axiomatic view of theories. A semantic view of theories, by contrast, correctly allows scientific explanations to be couched in the most perspicuous, powerful language available. On a semantic view, traditional notions of multiple realizability are thus very hard to motivate. At best, one must abandon either the idea that multiple realizability is an interesting scientific notion, or else admit that multiply realizable properties do not automatically block scientific reductions.
This article explores the connection between the Heng Xian and the Changes of Zhou tradition, especially the “Tuan” and “Attached Verbalizations” commentaries. Two important Heng Xian terms—heng 恆 and fu 復—are also Changes of Zhou hexagrams and possible connections are explored. Second, the Heng Xian account of the creation of names is compared with the “Attached Verbalizations” account of the creation of the Changes of Zhou system. Third, the roles played by knowing and desire in both Heng Xian and the Changes of Zhou tradition are explored, with particular focus on potential points of similarity. Finally, insights gained through these comparisons are used to interpret the Heng Xian advice on initiating action.
Episodic memory often is conceptualized as a uniquely human system of long-term memory that makes available knowledge accompanied by the temporal and spatial context in which that knowledge was acquired. Retrieval from episodic memory entails a form of first-person subjectivity called autonoetic consciousness that provides a sense that a recollection was something that took place in the experiencer’s personal past. In this paper I expand on this definition of episodic memory. Specifically, I suggest that (a) the core features assumed unique to episodic memory are shared by semantic memory, (b) episodic memory cannot be fully understood unless one appreciates that episodic recollection requires the coordinated function of a number of distinct, yet interacting, “enabling” systems. Although these systems – ownership, self, subjective temporality, and agency – are not traditionally viewed as memorial in nature, each is necessary for episodic recollection and jointly they may be sufficient, and (c) the type of subjective awareness provided by episodic recollection (autonoetic) is relational rather than intrinsic – i.e., it can be lost in certain patient populations, thus rendering episodic memory content indistinguishable from the content of semantic long-term memory.
Research on future-oriented mental time travel (FMTT) is highly active yet somewhat unruly. I believe this is due, in large part, to the complexity of both the tasks used to test FMTT and the concepts involved. Extraordinary care is a necessity when grappling with such complex and perplexing metaphysical constructs as self and time and their co-instantiation in memory. In this review, I first discuss the relation between future mental time travel and types of memory (episodic and semantic). I then examine the nature of both the types of self-knowledge assumed to be projected into the future and the types of temporalities that constitute projective temporal experience. Finally, I argue that a person lacking episodic memory should nonetheless be able to imagine a personal future by virtue of (a) the fact that semantic, as well as episodic, memory can be self-referential, (b) autonoetic awareness is not a prerequisite for FMTT, and (c) semantic memory does, in fact, enable certain forms of personally-oriented FMTT.
Recently, an associative learning account of cognitive control has been suggested (Verguts & Notebaert, 2009). In this so-called adaptation by binding theory, Hebbian learning of stimulus–stimulus and stimulus–response associations is assumed to drive the adaptation of human behavior. In this study, we evaluated the validity of the adaptation-by-binding account for the case of implicit learning of regularities within a stimulus set (i.e., the frequency of specific unit digit combinations in a two-digit number magnitude comparison task) and their association with a particular response. Our data indicated that participants indeed learned these regularities and adapted their behavior accordingly. In particular, influences of cognitive control were even able to override the numerical distance effect—one of the most robust effects in numerical cognition research. Thus, the general cognitive processes involved in two-digit number magnitude comparison seem much more complex than previously assumed. Multi-digit number magnitude comparison may not be automatic and inflexible but influenced by processes of cognitive control being highly adaptive to stimulus set properties and task demands on multiple levels.
In this study, we examine differences in cheating behaviors in higher education between two countries, namely the United States and the Czech Republic, which differ in many social, cultural and political aspects. We compare a recent (2011) Czech Republic survey of 291 students to that of 268 students in the US (Klein et al., 2007). For all items surveyed, CR students showed a higher propensity to engage in cheating. Additionally, we found more forms of serious cheating present in the Czech sample. In all cases, the differences between the US and Czech samples were statistically significant.
Several authors have recently argued that the content of pains (and bodily sensations more generally) is imperative rather than descriptive. I show that such an account can help resolve competing intuitions about phantom limb pain. As imperatives, phantom pains are neither true nor false. However, phantom limb pains presuppose falsehoods, in the same way that any imperative which demands something impossible presupposes a falsehood. Phantom pains, like many chronic pains, are thus commands that cannot be satisfied. I conclude by showing that some of the negative psychological consequences of chronic pain are a direct consequence of their imperative nature.
Table at Dimitri's Taverna : on seeking a philosophy of old age -- Old Greek's olive trees : on Epicurus's philosophy of fulfillment -- Deserted terrace : on time and worry beads -- Tasso's rain-spattered photographs : on solitary reflection -- Sirocco of youth's beauty : on existential authenticity -- Tintinnabulation of sheep bells : on mellowing to metaphysics -- Iphigenia's guest : on stoicism and old old age -- Burning boat in Kamini Harbor : on the timeliness of spirituality -- Returning home : on a mindful old age.
In their “The Prevalence of Mind-Body Dualism in Early China,” Slingerland and Chudek use a statistical analysis of the early Chinese corpus to argue for Weak Folk Dualism (WFD). We raise three methodological objections to their analysis. First, the change over time that they find is largely driven by genre. Second, the operationalization of WFD is potentially misleading. And, third, dating the texts they use is extremely controversial. We conclude with some positive remarks.
In this paper I argue that much of the confusion and mystery surrounding the concept of “self” can be traced to a failure to appreciate the distinction between the self as a collection of diverse neural components that provide us with our beliefs, memories, desires, personality, emotions, etc. (the epistemological self) and the self that is best conceived as subjective, unified awareness, a point of view in the first person (ontological self). While the former can, and indeed has, been extensively studied by researchers of various disciplines in the human sciences, the latter most often has been ignored – treated more as a place holder attached to a particular predicate of interest (e.g., concept, reference, deception, esteem, image, regulation, etc.). These two aspects of the self, I contend, are not reducible – one being an object (the epistemological self) and the other a subject (the ontological self). Until we appreciate the difficulties of applying scientific methods and analysis to what cannot be reduced to an object of inquiry without stripping it of its essential aspect (its status as subject), progress on the “self”, taken as a pluralistic construct, will continue to address only one part of the problems we face in understanding this most fundamental aspect of human experience.
Memory of past episodes provides a sense of personal identity — the sense that I am the same person as someone in the past. We present a neurological case study of a patient who has accurate memories of scenes from his past, but for whom the memories lack the sense of mineness. On the basis of this case study, we propose that the sense of identity derives from two components, one delivering the content of the memory and the other generating the sense of mineness. We argue that this new model of the sense of identity has implications for debates about quasi-memory. In addition, articulating the components of the sense of identity promises to bear on the extent to which this sense of identity provides evidence of personal identity.
I argue in the paper that classical chemistry is a science predominantly concerned with material substances, both useful materials and pure chemical substances restricted to scientific laboratory studies. The central epistemological and methodological status of material substances corresponds with the material productivity of classical chemistry and its way of producing experimental traces. I further argue that chemists’ ‘pure substances’ have a history, conceptually and materially, and I follow their conceptual history from the Paracelsian concept of purity to the modern concept of pure stoichiometric compounds. The history of the concept of ‘pure substances’ shows that modern chemists’ concept of purity abstracted from usefulness rather than being opposed to it. Thus modern chemists’ interest in pure chemical substances does not presuppose a concept of pure science.
Despite significant ethical advances in recent years, including professional developments in ethical review and codification, research deception continues to be a pervasive practice and contentious focus of debate in the behavioral sciences. Given the disciplines' generally stated ethical standards regarding the use of deceptive procedures, researchers have little practical guidance as to their ethical acceptability in specific research contexts. We use social contract theory to identify the conditions under which deception may or may not be morally permissible and formulate practical recommendations to guide researchers on the ethical employment of deception in behavioral science research.
The dual-track theory of moral reasoning has received considerable attention due to the neuroimaging work of Greene et al. Greene et al. claimed that certain kinds of moral dilemmas activated brain regions specific to emotional responses, while others activated areas specific to cognition. This appears to indicate a dissociation between different types of moral reasoning. I re-evaluate these claims of specificity in light of subsequent empirical work. I argue that none of the cortical areas identified by Greene et al. are functionally specific: each is active in a wide variety of both cognitive and emotional tasks. I further argue that distinct activation across conditions is not strong evidence for dissociation. This undermines support for the dual-track hypothesis. I further argue that moral decision-making appears to activate a common network that underlies self-projection: the ability to imagine oneself from a variety of viewpoints in a variety of situations. I argue that the utilization of self-projection indicates a continuity between moral decision-making and other kinds of complex social deliberation. This may have normative consequences, but teasing them out will require careful attention to both empirical and philosophical concerns.
Clinical neuroethics and neuroskepticism are recent entrants to the vocabulary of neuroethics. Clinical neuroethics has been used to distinguish problems of clinical relevance arising from developments in brain science from problems arising in neuroscience research proper. Neuroskepticism has been proposed as a counterweight to claims about the value and likely implications of developments in neuroscience. These two emergent streams of thought intersect within the practice of neurology. Neurologists face many traditional problems in bioethics, like end of life care in the persistent vegetative state, determination of capacity in progressive dementia, and requests for assisted suicide in cognition-preserving neurodegenerative disease (like amyotrophic lateral sclerosis). Neurologists also look to be at the forefront of downstream clinical applications of neuroscience, like pharmacological enhancement of mental life. At the same time, the practice of neurology, concerned primarily with the structure, function, and treatment of the nervous system, has historically fostered a kind of skeptical attitude toward its own subject matter. Not all problems that appear primarily neurological are primarily neurological. This disciplinary skepticism is generally clinical in orientation and limited in scope. The rise of interest in clinical neuroethics and in neuroskepticism generally suggests a possible broader application. The clinical skepticism of neurology provides impetus for thinking about the appropriate role for skepticism in clinical areas of neuroethics. After a brief review of neuroskepticism and clinical neuroethics, a taxonomy of clinical neuroskepticism is offered, along with reasons why a stronger rather than a weaker form of clinical neuroskepticism is currently warranted.
According to the Stoics, the psychology of adult human beings is unified in a striking sense: each of the soul's perceptive, discursive, and motivational functions belongs to the single faculty of reason. Reason, in turn, is constituted by a set of conceptions (ennoiai) and preconceptions (prolêpseis) acquired on the basis of experience. The few secure sources that bear on this theory in the early Stoa suggest that certain of these empirically acquired conceptions function, somehow, as a criterion of truth in human perception and rational cognition generally. The conceptions associated with this cognitive role are sometimes said to be shared or common (koinai). These tenuous but intriguing claims are the focus of…
Klein, Renate The practice of surrogacy in Australia has been controversial since its beginning in the late 1980s. In 1988, the famous 'Kirkman case' in the state of Victoria put surrogacy on the national map. This was a two-sisters surrogacy - Linda and Maggie Kirkman and the resulting baby Alice - in which power differences between the two women were extraordinarily stark: Maggie was the glamorous and well-spoken woman of the world; Linda, who carried the baby, was the demure school teacher in child-like frocks and pigtails. Their IVF doctor applauded altruistic surrogacy. He called it 'gestational surrogacy' and proclaimed that if the so-called surrogate mother didn't use her own eggs, thus wasn't the baby's 'genetic' mother, no attachment would ensue! This statement is haunting us to this day. It is patently absurd: as a baby grows in a woman's body over the nine months of the pregnancy, it is hard to see why the 24/7 presence of the baby inside her body, its growth, its interaction with her (movements, the baby's kicking) would be any different depending on whether s/he has the mother's genes!
In part 1 of the paper, I develop a Platonic business ethic, emphasizing Plato’s Republic. I approach business ethics from a virtue ethics position, and I attempt to show that a Platonic craftsmanship model infuses a corporation with a type of managerial wisdom and justice, molds temperate and courageous corporate characters, and entails a morally fine type of self-interest. I also show that it is basic to two influential management theories. In part 2, I use Amartya Sen’s Development as Freedom to show that the craftsmanship model is central to the concept of development. This concept is important in ethical discussions of both globalization and transnational corporations. Thus, I attempt to globalize Platonic business ethics using a craftsmanship model. In my concluding remarks, I attempt to show that the Plato/Sen position on development can be illustrated by both American and non-American capitalist firms functioning in our globalized world.
I explore certain interconnections and commonalities among technology, corporations, and contemporary globalization in order to best understand the dangerous ethical and social consequences that accrue from them. I begin by discussing the notion of means becoming ends. Technology as means and corporate instrumental values tend to become ends-in-themselves. I then suggest that technologists’ and corporate managers’ quantitative methods are ill-equipped to deal with questions of intrinsic value or ends, which are qualitative. Moreover, “development,” a key term in globalization discussions, is often defined quantitatively (in economic terms) rather than qualitatively. I argue that this view is too narrow. Next, I discuss limiting autonomy as an important issue common to technology, corporations, and contemporary globalization. Material progress as a goal common to technology, corporations, and contemporary globalization is also considered. Technological mistakes and a neo-liberal, laissez-faire economy are said to be self-corrective, and this feature is used to support the notion of material progress. I argue that this has proved to be too optimistic. In the last section, I use certain contemporary leadership theorists to criticize Kenneth Galbraith’s and Peter Drucker’s views on corporate governance by technocratic specialists. I also discuss recent developments of the concept of technological assessment and related work by TU Delft researchers.
To understand the human capacity for psychological altruism, one requires a proper understanding of how people actually think and feel. This paper addresses the possible relevance of recent findings in experimental economics and neuroeconomics to the philosophical controversy over altruism and egoism. After briefly sketching and contextualizing the controversy, we survey and discuss the results of various studies on behaviourally altruistic helping and punishing behaviour, which provide stimulating clues for the debate over psychological altruism. On closer analysis, these studies prove less relevant than originally expected because the data obtained admit competing interpretations – such as people seeking fairness versus people seeking revenge. However, this qualified conclusion does not preclude the possibility of more fruitful research in the area in the future. Throughout our analysis, we provide hints for the direction of future research on the question.
fMRI promises to uncover the functional structure of the brain. I argue, however, that pictures of ‘brain activity’ associated with fMRI experiments are poor evidence for functional claims. These neuroimages present the results of null hypothesis significance tests performed on fMRI data. Significance tests alone cannot provide evidence about the functional structure of causally dense systems, including the brain. Instead, neuroimages should be seen as indicating regions where further data analysis is warranted. This additional analysis rarely involves simple significance testing, and so justified skepticism about neuroimages does not provide reason for skepticism about fMRI more generally.
Functional neuroimaging (NI) technologies like Positron Emission Tomography and functional Magnetic Resonance Imaging (fMRI) have revolutionized neuroscience, and provide crucial tools to link cognitive psychology and traditional neuroscientific models. A growing discipline of 'neurophilosophy' brings fMRI evidence to bear on traditional philosophical issues such as weakness of will, moral psychology, rational choice, social interaction, free will, and consciousness. NI has also attracted critical attention from psychologists and from philosophers of science. I review debates over the evidential status of fMRI, including the differences between brain scans and ordinary images, the legitimacy of forward inference and reverse inference, and deductive versus probabilistic accounts of NI evidence. I conclude with a discussion of fMRI as exploratory rather than confirmatory evidence, linking this debate to the growing literature on cognitive ontology.
Maura Tumulty has raised two objections to my imperative account of pain.1 First, she argues that there is a disanalogy between pains and other imperative sensations like itch, hunger, and thirst. Suppose (with Hall) one thinks that an itch says “Scratch here!”2 Scratch the itch, and it dutifully disappears. Not so with pain. The pain of a broken ankle has the content ‘Do not put weight on that ankle!’ Yet the coddled ankle still throbs: obeying the imperative does not extinguish it. Second, Tumulty argues that the imperative account cannot handle certain pains, particularly pains of the deep viscera. On my account, pains proscribe taking action with the painful body part. Yet some pains are associated with body parts over which we have no control. Kidney stones cause intense pain, but I cannot (voluntarily) control my kidney. What action, then, could that pain possibly proscribe? Lacking such a story, it is hard to say (as I do) that pains are exhausted by their imperative content.
This is the third draft of a paper that aims to clarify the apparent contradictions in the views presented in certain standards and other specifications of health informatics systems, contradictions which come to light when the latter are evaluated from the perspective of realist philosophy. One of the origins of this document was Klein’s discussion paper of 2005-07-02 entitled “Conceptology vs Reality” and the responses from Smith, as well as the several hours of discussions during the 2005 MIE meeting in Geneva.
This article analyzes Carl Schmitt's theory of democracy and, on that basis, seeks to highlight its virtues and shortcomings. The text is divided into two parts. The first argues that the Schmittian theory of democracy unfolds on two different levels: a conceptual level, essentially analytical, and a phenomenal level, which according to Schmitt would be merely descriptive. Within this horizon, Schmitt's theory of democracy and his critique of parliamentary democracy can be better understood. The second part presents some criticisms of Schmitt's position.
Taking stock of interdisciplinarity as it nears its century mark, the Oxford Handbook of Interdisciplinarity constitutes a major new reference work on the topic of interdisciplinarity, a concept of growing academic and societal importance. Interdisciplinarity is fast becoming as important outside academia as within. Academics, policy makers, and the general public are seeking methods and approaches to help organize and integrate the vast amounts of knowledge being produced, both within research and at all levels of education. The Oxford Handbook of Interdisciplinarity provides a synoptic overview of the current state of interdisciplinary research, education, administration and management, and problem solving: knowledge that spans the disciplines and interdisciplinary fields, and crosses the space between the academic community and society at large. Its 37 chapters and 14 case studies provide a snapshot of the state of knowledge integration as interdisciplinarity approaches its century mark. This groundbreaking text offers by far the most broad-based account of inter- and transdisciplinarity to date. Its original essays bring together many of the globe's leading thinkers on interdisciplinary research, education, and the institutional aspects of interdisciplinarity, as well as extended reflections on how knowledge is integrated into societal needs.
ABSTRACT. Associationist psychologists of the late 19th century premised their research on a fundamentally Humean picture of the mind. So the very idea of mental science was called into question when T. H. Green, a founder of British idealism, wrote an influential attack on Hume’s Treatise. I first analyze Green’s interpretation and criticism of Hume, situating his reading with respect to more recent Hume scholarship. I focus on Green’s argument that Hume cannot consistently admit real ideas of spatial relations. I then argue that William James’s early work on spatial perception attempted to vindicate the new science of mind by showing how to avoid the problems Green had exposed in Hume’s empiricism. James’s solution involved rejecting a basic Humean assumption—that perceptual experience is fundamentally composed of so-called minima sensibilia, or psychological atoms. The claim that there are no psychological atoms is interesting because James supported it with experimental data rather than (as commentators typically suppose) with introspective description or a priori argument. James claimed to be the real descendant of British empiricism on grounds that his anti-atomistic model of perception fortified what Green had perhaps most wanted to demolish—the prospect of using empirical, scientific methods in the study of mind.
Unlike the overall framework of Ernest Nagel's work on reduction, his theory of intertheoretic connection still has life in it. It handles aptly cases where reduction requires complex representation of a target domain. Abandoning his formulation as too liberal was a mistake. Arguments that it is too liberal at best touch only Nagel's deductivist theory of explanation, not his condition of connectability. Taking this condition seriously gives a powerful view of reduction, but one which requires us to index explanatory power to sciences as they are formulated at particular times. While we may thereby reduce more than philosophers have supposed, we must abandon hope (as Nagel did) of saying anything useful about reductionism.
The president’s science advisor was formally established in the days following the Soviet launch of Sputnik at the height of the Cold War, creating an impression of scientists at the center of presidential power. Since that time, however, the science advisor’s role has been far more prosaic, one that might be more aptly described as a coordinator of budgets and programs, and thus more closely related to the functions of the Office of Management and Budget than to the development of presidential policy. This role dramatically enhances the position of the scientific community to argue for its share of federal expenditures. At the same time, scientific and technological expertise permeates every function of government policy and politics, and the science advisor is only rarely involved in wider White House decision making. The actual role of the science advisor, as compared to its heady initial days and in the context of an overall rise of governmental expertise, provides ample reason to reconsider the role of the presidential science advisor and to set our expectations for that role accordingly.
ABSTRACT. May scientists rely on substantive, a priori presuppositions? Quinean naturalists say "no," but Michael Friedman and others claim that such a view cannot be squared with the actual history of science. To make his case, Friedman offers Newton's universal law of gravitation and Einstein's theory of relativity as examples of admired theories that both employ presuppositions (usually of a mathematical nature), presuppositions that do not face empirical evidence directly. In fact, Friedman claims that the use of such presuppositions is a hallmark of "science as we know it." But what should we say about the special sciences, which typically do not rely on the abstruse formalisms one finds in the exact sciences? I identify a type of a priori presupposition that plays an especially striking role in the development of empirical psychology. These are ontological presuppositions about the type of object a given science purports to study. I show how such presuppositions can be both a priori and rational by investigating their role in an early flap over psychology's contested status as a natural science. The flap focused on one of the field's earliest textbooks, William James's Principles of Psychology. The work was attacked precisely for its reliance on a priori presuppositions about what James had called the "mental state," psychology's (alleged) proper object. I argue that the specific presuppositions James packed into his definition of the "mental state" were not directly responsible to empirical evidence, and so in that sense were a priori; but the presuppositions were rational in that they were crafted to help overcome philosophical objections (championed by neo-Hegelians) to the very idea that there can be a genuine science of mind. Thus, my case study gives an example of substantive, a priori presuppositions being put to use—to rational use—in the special sciences.
In addition to evaluating James's use of presuppositions, my paper also offers historical reflections on two different strands of pragmatist philosophy of science. One strand, tracing back through Quine to C. S. Peirce, is more naturalistic, eschewing the use of a priori elements in science. The other strand, tracing back through Kuhn and C. I. Lewis to James, is more friendly to such presuppositions, and to that extent bears affinity with the positivist tradition Friedman occupies.
Multiply realizable (MR) kinds are scientifically problematic, for it appears that we should not expect discoveries about one realization of a kind to hold of the kind's other members. As such, it looks like MR kinds should have no place in the ontology of the special sciences. Many resist this conclusion, however, because we lack a positive account of the role that certain realization-unrestricted terms play in special science explanations. I argue that many such terms actually pick out idealizing models. Idealizing explanation has many of the features normally associated with explanation by MR kinds. As idealized models are usually mere possibilia, such explanations do not run afoul of the metaphysical problems that plague MR kinds.
Consciousness supervenes on activity; computation supervenes on structure. Because of this, some argue, conscious states cannot supervene on computational ones. If true, this would present serious difficulties for computationalist analyses of consciousness (or, indeed, of any domain with properties that supervene on actual activity). I argue that the computationalist can avoid the Superfluous Structure Problem (SSP) by moving to a dispositional theory of implementation. On a dispositional theory, the activity of computation depends entirely on changes in the intrinsic properties of implementing material. As extraneous structure is not required for computation, a system can implement a program running on some but not all possible inputs. Dispositional computationalism thus permits episodes of computational activity that correspond to potential episodes of conscious awareness. The SSP cannot be motivated against this account, and so computationalism may be preserved.