in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions oblige highly associated categories (e.g., flower + pleasant) to share a response key, performance is faster than when less associated categories (e.g., insect + pleasant) share a key. This performance difference implicitly measures differential association of the 2 concepts with the attribute. In 3..
Neuropsychological research on the neural basis of behaviour generally posits that brain mechanisms will ultimately suffice to explain all psychologically described phenomena. This assumption stems from the idea that the brain is made up entirely of material particles and fields, and that all causal mechanisms relevant to neuroscience can therefore be formulated solely in terms of properties of these elements. Thus, terms having intrinsic mentalistic and/or experiential content (e.g. ‘feeling’, ‘knowing’ and ‘effort’) are not included as primary causal factors. This theoretical restriction is motivated primarily by ideas about the natural world that have been known to be fundamentally incorrect for more than three-quarters of a century. Contemporary basic physical theory differs profoundly from classic physics on the important matter of how the consciousness of human agents enters into the structure of empirical phenomena. The new principles contradict the older idea that local mechanical processes alone can account for the structure of all observed empirical data. Contemporary physical theory brings directly and irreducibly into the overall causal structure certain psychologically described choices made by human agents about how they will act. This key development in basic physical theory is applicable to neuroscience, and it provides neuroscientists and psychologists with an alternative conceptual framework for describing neural processes. Indeed, owing to certain structural features of ion channels critical to synaptic function, contemporary physical theory must in principle be used when analysing human brain dynamics. The new framework, unlike its classic-physics-based predecessor, is erected directly upon, and is compatible with, the prevailing principles of physics. It is able to represent more adequately than classic concepts the neuroplastic mechanisms relevant to the growing number of empirical studies of the capacity of directed attention and mental effort to systematically alter brain function.
The data emerging from the clinical and brain studies described above suggest that, in the case of OCD, there are two pertinent brain mechanisms that are distinguishable both in terms of neuro-dynamics and in terms of the conscious experiences that accompany them. These mechanisms can be characterized, on anatomical and perhaps evolutionary grounds, as a lower-level and a higher-level mechanism. The clinical treatment has, when successful, an activating effect on the higher-level mechanism, and a suppressive effect on the lower-level one.
In a previous paper a theory of program size formally identical to information theory was developed. The entropy of an individual finite object was defined to be the size in bits of the smallest program for calculating it. It was shown that this is − log2 of the probability that the object is obtained by means of a program whose successive bits are chosen by flipping an unbiased coin. Here a theory of the entropy of recursively enumerable sets of objects is proposed which includes the previous theory as the special case of sets having a single element. The primary concept in the generalized theory is the probability that a computing machine enumerates a given set when its program is manufactured by coin flipping. The entropy of a set is defined to be − log2 of this probability.
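In symbols (our own shorthand; H, P, U, and S are not notation taken from the abstract), the quantities described above can be sketched as

\[
H(x) \;=\; \min\{\, |p| : U(p) = x \,\} \;=\; -\log_2 P(x), \qquad
P(x) \;=\; \Pr\big[\, U(p) = x \text{ when the bits of } p \text{ are chosen by fair coin flips} \,\big],
\]

where U is a universal computer and the first identity holds to within an additive constant (the coding theorem). The generalization replaces exact computation of a single object by enumeration of a recursively enumerable set S:

\[
H(S) \;=\; -\log_2 \Pr\big[\, U \text{ enumerates exactly the set } S \text{ under the same coin-flipping measure} \,\big],
\]

which reduces to the single-object definition when S has one element.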
clusions are only probably correct. On the other hand, algorithmic information theory provides a precise mathematical definition of the notion of random or patternless sequence. In this paper we shall describe conditions under which if the sequence of coin tosses in the Solovay–Strassen and Miller–Rabin algorithms is replaced by a sequence of heads and tails that is of maximal algorithmic information content, i.e., has maximal algorithmic randomness, then one obtains an error-free test for primality. These results are only of theoretical interest, since it is a manifestation of the Gödel incompleteness phenomenon that it is impossible to "certify" a sequence to be random by means of a proof, even though most sequences have this property. Thus by using certified random sequences one can in principle, but not in practice, convert probabilistic tests for primality into deterministic ones.
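A minimal sketch may make the idea concrete. The following Miller–Rabin test (an illustrative reconstruction, not Chaitin's construction; the function name and the witness-selection scheme are our own assumptions) takes its "coin tosses" as an explicit bit string, so that substituting any fixed sequence (in the paper's argument, one of maximal algorithmic information content) turns the probabilistic test into a deterministic procedure:

    # Illustrative sketch only (not Chaitin's construction): a Miller-Rabin primality
    # test whose "coin tosses" are supplied as an explicit bit string, so that
    # substituting a fixed sequence for the random bits makes the test deterministic.
    # The function name and the witness-selection scheme are our own assumptions.

    import random

    def miller_rabin_with_bits(n, bits, rounds=20):
        """Run `rounds` Miller-Rabin rounds on n, drawing witnesses from `bits`."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # write n - 1 as d * 2^s with d odd
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        k = n.bit_length()
        pos = 0
        for _ in range(rounds):
            # consume the next k "coin flips" to pick a witness a in [2, n - 2]
            chunk, pos = bits[pos:pos + k], pos + k
            if len(chunk) < k:
                raise ValueError("bit sequence exhausted")
            a = 2 + int(chunk, 2) % (n - 3)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False        # this witness proves n composite
        return True                 # no witness found: "probably prime"

    # With pseudo-random flips the verdict is only probably correct; per the
    # abstract's argument, feeding in a fixed string of maximal algorithmic
    # information content would in principle (but not in practice) yield an
    # error-free test, since such a string cannot be certified by a proof.
    coin_flips = "".join(random.choice("01") for _ in range(4000))
    print(miller_rabin_with_bits(104729, coin_flips))   # 104729 is prime -> True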
The Negation Problem states that expressivism has insufficient structure to account for the various ways in which a moral sentence can be negated. We argue that the Negation Problem does not arise for expressivist accounts of all normative language but arises only for the specific examples on which expressivists usually focus. In support of this claim, we argue for the following three theses: 1) A problem that is structurally identical to the Negation Problem arises in non-normative cases, and this problem is solved once the hidden quantificational structure involved in such cases is uncovered; 2) the terms ‘required’, ‘permissible’, and ‘forbidden’ can also be analyzed in terms of hidden quantification over a normative primitive, and the Negation Problem disappears once this hidden structure is uncovered; 3) the Negation Problem does not arise for normative language that has no hidden quantificational structure. We conclude that the Negation Problem is not really a problem about expressivism at all but is rather a feature of the quantificational structure of the required, permitted, and forbidden.
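As a schematic illustration of the kind of hidden quantificational structure at issue (our own gloss, using a normative primitive N that the paper does not name this way), the three terms might be read as

\[
\text{Required}(\varphi) \;\equiv\; \forall x\,\big(N(x) \rightarrow \varphi(x)\big), \qquad
\text{Permissible}(\varphi) \;\equiv\; \exists x\,\big(N(x) \wedge \varphi(x)\big), \qquad
\text{Forbidden}(\varphi) \;\equiv\; \neg\exists x\,\big(N(x) \wedge \varphi(x)\big).
\]

On such a reading, the different negations of a moral sentence come apart as ordinary scope distinctions, for instance \(\neg\text{Required}(\varphi)\) ("not required") versus \(\text{Required}(\neg\varphi)\) ("required not to"), which is the sense in which the problem belongs to the quantificational structure of these notions rather than to expressivism.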
Terraforming is a process of planetary engineering by which the extant environment of a planet is manipulated so as to produce an Earth-like ecosystem. This paper explores the ethical questions about the exploration of space and the exploitation of space resources that arise in the consideration of terraforming. I argue that space advocacy (including the pursuit of terraforming) and environmentalism are mutually beneficial endeavors. I show that the moral status of terraforming a planet, at least under traditional anthropocentric and non-anthropocentric positions, is sensitive to whether life exists on the candidate planet. I also examine several attempts—due to Holmes Rolston, Keekok Lee, Alan Marshall, and Robert Sparrow—to show that terraforming a planet would be impermissible even if the planet was not home to life. I argue that no attempt provides compelling reasons for the supposition that terraforming is morally impermissible.
I attempt to raise questions regarding elements of systematics—primarily in the realm of phylogenetic reconstruction—in order to provoke discussion on the current state of affairs in this discipline, and also in evolutionary biology in general: e.g., conceptions of homology and homoplasy, hypothesis testing, the nature of and objections to Hennigian “phylogenetic systematics”, and the schism between (neo)Darwinian descendants of the “modern evolutionary synthesis” and their supposed antagonists, cladists and punctuationalists.
We propose that the sudden emergence of metazoans during the Cambrian was due to the appearance of a complex genome architecture that was capable of computing. In turn, this made defining recursive functions possible. The underlying molecular changes that occurred in tandem were driven by the increased probability of maintaining duplicated DNA fragments in the metazoan genome. In our model, an increase in telomeric units, in conjunction with a telomerase-negative state and consequent telomere shortening, generated a reference point equivalent to a non-reversible counting mechanism.
The problem of the unity of the proposition asks what binds together the constituents of a proposition into a fully formed proposition that provides truth conditions for the assertoric sentence that expresses it, rather than merely a set of objects. Hanks’ solution is to reject the traditional distinction between content and force. If his theory is successful, then there is a plausible extension of it that readily solves the Frege–Geach problem for normative propositions. Unfortunately, Hanks’ theory isn’t successful, but it does point to significant connections between expressivism, unity, and embedding.
The moral obligation to support space exploration follows from our obligations to protect the environment and to survive as a species. It can be justified through three related arguments: one supporting space exploration as necessary for acquiring resources, and two illustrating the need for space technology in order to combat extraterrestrial threats such as meteorite impacts. Three sorts of objections have been raised against this obligation. The first comprises objections alleging that supporting space exploration is impractical. The second is the widely held notion that space exploration and environmentalism are at odds with one another. Finally, there are two objections to using space resources that Robert Sparrow has raised on the topic of terraforming. The obligation to support space exploration can be defended in at least three ways: (1) the "argument from resources," that space exploration is useful for amplifying our available resources; (2) the "argument from asteroids," that space exploration is necessary for protecting the environment and its inhabitants from extraterrestrial threats such as meteorite impacts; and (3) the "argument from solar burnout," that we are obligated to pursue interstellar colonization in order to ensure long-term human survival.
Recently, the idea that every hypothetical imperative must somehow be 'backed up' by a prior categorical imperative has gained a certain influence among Kant interpreters and ethicists influenced by Kant. Since instrumentalism is the position that holds that hypothetical imperatives can by themselves and without the aid of categorical imperatives explain all valid forms of practical reasoning, the influential idea amounts to a rejection of instrumentalism as internally incoherent. This paper argues against this prevailing view both as an interpretation of Kant and as a philosophical understanding of practical reason. In particular, it will be argued that many of the arguments that claim to show that hypothetical imperatives must be backed up by categorical imperatives mistakenly assume that the form of practical reasoning must itself occur as a premise within the reasoning. An alternative to this assumption will be offered. I will conclude that while instrumentalism may well be false, there is no reason to believe it is incoherent.
Although the construction of neo-Darwinism grew out of Thomas Hunt Morgan's melding of Darwinism and Mendelism, his evidence did not solely support a model of gradual change. To the contrary, he was confronted with observations that could have led him to a more "evo-devo" understanding of the emergence of novel features. Indeed, since Morgan was an embryologist before he became a fruit-fly geneticist, one would have predicted that the combination of these two lines of research would have resulted in early formulations of concepts relevant to evolutionary developmental biology. It is thus of interest to review Morgan's thought processes and arguments for at first rejecting both Darwinism and Mendelism, and then for later dismissing data that would have yielded a model of rapid morphological change in favor of a model of change based on the accumulation of minor mutations and their morphological consequences.
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se, since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition, owing to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture with a non-speech visual input having exactly the same time course, and hence providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relation to recent neurophysiological data on audio-visual perception.
One of the fundamental questions raised by Ruchkin, Grafman, Cameron, and Berndt's (Ruchkin et al.'s) interpretation, on which there are no distinct specialized neural networks for short-term storage buffers and long-term memory systems, is that of the link between perception and memory processes. In this framework, we take the opportunity in this commentary to discuss a specific working memory task involving percept formation, temporary retention, auditory imagery, and the attention-based maintenance of information, that is, the verbal transformation effect.
Robert Chambers and Thomas Henry Huxley helped popularize science by writing for general interest publications when science was becoming increasingly professionalized. A non-professional, Chambers used his family-owned Chambers' Edinburgh Journal to report on scientific discoveries, giving his audience access to ideas that were otherwise available only to scientists who regularly attended professional meetings or read published transactions of such forums. He had no formal training in the sciences and little interest in advancing the professional status of scientists; his course of action was determined by his disability and interest in scientific phenomena. His skillful reporting enabled readers to learn how the ideas that flowed from scientific innovation affected their lives, and his series of articles in the Journal presenting his rudimentary ideas on evolution served as a prelude to his important popular work, Vestiges of the Natural History of Creation. Huxley, an example of the new professional class of scientists, defended science and evolution from attacks by religious spokesmen and other opponents of evolution, informing the British public about science through his lectures and articles in such publications as Nineteenth Century. He understood that by popularizing scientific information, he could effectively challenge the old Tory establishment -- with its orthodox religious and political views -- and promote the ideas of the new class of professional scientists. In attempting to transform British society, he frequently came into conflict with theologians and others on issues in which science and religion seemed to contradict each other, but he refused to discuss matters of science with non-professionals like Chambers, whose popular writing struck a more resonant chord with working-class readers.
We agree with MacNeilage's claim that speech stems from a volitional vocalization pathway between the cingulate and the supplementary motor area (SMA). We add the vocal self-monitoring system as the first recruitment of the Broca-Wernicke circuit. SMA control for “frames” is supported by wrong consonant-vowel recurring utterance aphasia and an imaging study of quasi-reiterant speech. The role of Broca's area is questioned in the emergence of “content,” because a primary motor mapping, embodying peripheral constraints, seems sufficient. Finally, we reject a uniquely peripheral account of speech emergence.
State legislatures consider numerous bills to regulate managed care organizations. After identifying the legal, political, and economic barriers to state reform efforts, the paper assesses recent types of state regulation, particularly mandated benefits and disclosure requirements. Two prerequisites to future reform, coalition building and the diffusion of information about managed care, are analyzed.