We must distinguish between what can be described or interpreted as X and what really is X. Otherwise we are just doing hermeneutics. It won't do simply to declare that the thermostat turns on the furnace because it feels cold or that the chess-playing computer program makes a move because it thinks it should get its queen out early. In what does real feeling and thinking consist?
Differences can be perceived as gradual and quantitative, as with different shades of gray, or they can be perceived as more abrupt and qualitative, as with different colors. The first is called continuous perception and the second categorical perception. Categorical perception (CP) can be inborn or can be induced by learning. Formerly thought to be peculiar to speech and color perception, CP turns out to be far more general, and may be related to how the neural networks in our brains detect the features that allow us to sort the things in the world into their proper categories, "warping" perceived similarities and differences so as to compress some things into the same category and separate others into different categories.
The experimental analysis of naming behavior can tell us exactly the kinds of things Horne & Lowe (H & L) report here: (1) the conditions under which people and animals succeed or fail in naming things and (2) the conditions under which bidirectional associations are formed between inputs (objects, pictures of objects, seen or heard names of objects) and outputs (spoken names of objects, multimodal operations on objects). The "stimulus equivalence" that H & L single out is really just the reflexive, symmetric and transitive property of pairwise associations among the above. This is real and of some interest, but it unfortunately casts very little light on symbolization and language in general, and naming capacity in particular. The associative equivalence between name and object is trivial in relation to the real question, which is: How do we (or any system that can do it) manage to connect names to things correctly (Harnad 1987, 1990, 1992)? The experimental analysis of naming behavior begs this question entirely, simply taking it for granted that the connection is somehow successfully accomplished.
According to "computationalism" (Newell, 1980; Pylyshyn 1984; Dietrich 1990), mental states are computational states, so if one wishes to build a mind, one is actually looking for the right program to run on a digital computer. A computer program is a semantically interpretable formal symbol system consisting of rules for manipulating symbols on the basis of their shapes, which are arbitrary in relation to what they can be systematically interpreted as meaning. According to computationalism, every physical implementation of the right (...) symbol system will have mental states. (shrink)
1.1 The predominant approach to cognitive modeling is still what has come to be called "computationalism" (Dietrich 1990, Harnad 1990b), the hypothesis that cognition is computation. The more recent rival approach is "connectionism" (Hanson & Burr 1990, McClelland & Rumelhart 1986), the hypothesis that cognition is a dynamic pattern of connections and activations in a "neural net." Are computationalism and connectionism really deeply different from one another, and if so, should they compete for cognitive hegemony, or should they collaborate? These questions will be addressed here, in the context of an obstacle that is faced by computationalism (as well as by connectionism if it is either computational or seeks cognitive hegemony on its own): The symbol grounding problem (Harnad 1990).
Human cognition is not an island unto itself. As a species, we are not Leibnizian Monads independently engaging in clear, Cartesian thinking. Our minds interact. That's surely why our species has language. And that interactivity probably constrains both what and how we think.
Suppose Boeing 747s grew on trees. They would first sprout as embryonic planes, the size of an acorn. Then they would grow until they reached full size, when they would plop off the trees, ready to fly. Suppose also that we knew how to feed and care for them, how to make minor repairs, and of course how to fly them. But let us suppose that all of this transpired at a very early stage in our scientific history, when we did not yet understand the physics or the engineering of flight: Hence the phenomenon was a complete mystery to us. (To keep things simple, let us suppose that no other entity on earth could fly, only 747s.) And for the last ingredient in this fantasy world, suppose that computers likewise grew on trees, and we knew how to use and fix them too.
Peer Review and Copyright each have a double role: Formal refereeing protects (R1) the author from publishing and (R2) the reader from reading papers that are not of sufficient quality. Copyright protects the author from (C1) theft of text and (C2) theft of authorship. It has been suggested that in the electronic medium we can dispense with peer review, "publish" everything, and let browsing and commentary do the quality control. It has also been suggested that special safeguards and laws may be needed to enforce copyright on the Net. I will argue, based on 20 years of editing Behavioral and Brain Sciences, a refereed (paper) journal of peer commentary, 8 years of editing Psycoloquy, a refereed electronic journal of peer commentary, and 1 year of implementing CogPrints, an electronic archive of unrefereed preprints and refereed reprints in the cognitive sciences modeled on the Los Alamos Physics Eprint Archive, that (i) peer commentary is a supplement to, not a substitute for, peer review, (ii) the authors of refereed papers, who get and seek no royalties from the sale of their texts, only want protection from theft of authorship on the Net, not from theft of text, which is a victimless crime, and hence (iii) the trade model (subscription, site license or pay-per-view) should be replaced by author page-charges to cover the much reduced cost of implementing peer review, editing and archiving on the Net, in exchange for making the learned serial corpus available for free for all forever.
In innate Categorical Perception (CP) (e.g., colour perception), similarity space is "warped," with regions of increased within-category similarity (compression) and regions of reduced between-category similarity (separation) enhancing the category boundaries and making categorisation reliable and all-or-none rather than graded. We show that category learning can likewise warp similarity space, resolving uncertainty near category boundaries. Two Hard and two Easy texture learning tasks were compared: As predicted, there were fewer successful Learners with the Hard task, and only the successful Learners of the Hard task exhibited CP. In a second experiment, the Easy task was made Hard by making the corrective feedback during learning only 90% reliable; this too generated CP. The results are discussed in relation to supervised, unsupervised and dual-mode models of category learning and representation. The world is full of things that vary in their similarity and interconfusability. Organisms must somehow resolve this confusion, sorting and acting upon things adaptively. It might be important, for example, to learn which kinds of mushrooms are poisonous and which are safe to eat, minimising the confusion between them (Greco, Cangelosi & Harnad 1997).
This is a paperback reissue of a 1988 special issue of Cognition - dated but still of interest. The book consists of three chapters, each making one major negative point about connectionism. Fodor & Pylyshyn (F&P) argue that connectionist networks (henceforth 'nets') are not good models for cognition because they lack 'systematicity', Pinker & Prince (P&P) argue that nets are not good substitutes for rule-based models of linguistic ability, and Lachter & Bever (L&B) argue that nets can only model the associative relations between cognitive structures, not the structures themselves.
This article is a critique of: Jean-Claude Guédon, "The 'Green' and 'Gold' Roads to Open Access: The Case for Mixing and Matching," Serials Review 30(4), 2004, http://dx.doi.org/10.1016/j.serrev.2004.09.005. Open Access (OA) means: free online access to all peer-reviewed journal articles.
We are accustomed to thinking that a primrose is "concrete" and a prime number is "abstract," that "roundness" is more abstract than "round," and that "property" is more abstract than "roundness." In reality, the relation between "abstract" and "concrete" is more like the (non)relation between "abstract" and "concave," "concrete" being a sensory term [about what something feels like] and "abstract" being a functional term (about what the sensorimotor system is doing with its input in order to produce its output): Feelings and things are correlated, but otherwise incommensurable. Everything that any sensorimotor system such as ourselves manages to categorize successfully is based on abstracting sensorimotor "affordances" (invariant features). The rest is merely a question of what inputs we can and do categorize, and what we must abstract from the particulars of each sensorimotor interaction in order to be able to categorize them correctly. To categorize, in other words, is to abstract. And not to categorize is merely to experience. Borges's Funes the Memorious, with his infinite, infallible rote memory, is a fictional hint at what it would be like not to be able to categorize, not to be able to selectively forget and ignore most of our input by abstracting only its reliably recurrent invariants. But a sensorimotor system like Funes would not really be viable, for if something along those lines did exist, it could not categorize recurrent objects, events or states, hence it could have no language, private or public, and could at most only feel, not function adaptively (hence survive). Luria's "S" in "The Mind of a Mnemonist" is a real-life approximation whose difficulties in conceptualizing were directly proportional to his difficulties in selectively forgetting and ignoring. Watanabe's "Ugly Duckling Theorem" shows how, if we did not selectively weight some properties more heavily than others, everything would be equally (and infinitely and indifferently) similar to everything else. Miller's "Magical Number Seven Plus or Minus Two" shows that there are (and must be) limitations on our capacity to process and remember information, both in our capacity to discriminate relatively (detect sameness/difference, degree-of-similarity) and in our capacity to discriminate absolutely (identify, categorize, name). The phenomenon of categorical perception shows how selective feature-detection puts a Whorfian "warp" on our feelings of similarity in the service of categorization, compressing within-category similarities and expanding between-category differences by abstracting and selectively filtering inputs through their invariant features, thereby allowing us to sort and name things reliably. Language does allow us to acquire categories indirectly through symbolic description...
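Watanabe's theorem can be stated exactly. On the standard extensional reading of "predicate" (a predicate is just a subset of the set O of objects; this formalization is an illustrative gloss, not part of the abstract above), the number of predicates true of both members of any pair of distinct objects a and b is

\[
\bigl|\{P \subseteq O : a \in P \text{ and } b \in P\}\bigr| \;=\; 2^{|O|-2} \qquad \text{for every pair of distinct } a, b \in O,
\]

the same count for every pair. Unless some predicates are weighted more heavily than others, then, each pair of objects shares exactly as many properties as any other pair, and everything is equally similar to everything else.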
Libet, Gleason, Wright, & Pearl (1983) asked participants to report the moment at which they freely decided to initiate a pre-specified movement, based on the position of a red marker on a clock. Using event-related potentials (ERPs), Libet found that the subjective feeling of deciding to perform a voluntary action came after the onset of the motor “readiness potential” (RP). This counterintuitive conclusion poses a challenge for the philosophical notion of free will. Faced with these findings, Libet (1985) proposed that conscious volitional control might operate as a selector and a controller of volitional processes rather than as an initiator of them.
Maybe it's just because hermeneutics is so much in vogue these days, but I've lately come to believe that the secret of the meaning of life is revealed by certain jokes from the state of Maine. The pertinent one on this occasion (and some of you will recognize it as one I've invoked before) is the one that goes "How's your wife?" to which the appropriate deadpan downeaster reply is: "Compared to what?"
SUMMARY: Universities (the universal research-providers) as well as research funders (public and private) are beginning to make it part of their mandates to ensure not only that researchers conduct and publish peer-reviewed research (“publish or perish”), but that they also make it available online, free for all. This is called Open Access (OA), and it maximizes the uptake, impact and progress of research by making it accessible to all potential users worldwide, not just those whose universities can afford to subscribe to the journal in which it is published. Researchers can provide OA to their published journal articles by self-archiving them in their own university’s online repository. Students and junior faculty -- the next generation of research providers and consumers -- are in a position to help accelerate the adoption of OA self-archiving mandates by their universities, ushering in the era of universal OA.
Some of the features of animal and human categorical perception (CP) for color, pitch and speech are exhibited by neural net simulations of CP with one-dimensional inputs: When a backprop net is trained to discriminate and then categorize a set of stimuli, the second task is accomplished by "warping" the similarity space (compressing within-category distances and expanding between-category distances). This natural side-effect also occurs in humans and animals. Such CP categories, consisting of named, bounded regions of similarity space, may be the ground level out of which higher-order categories are constructed; nets are one possible candidate for the mechanism that learns the sensorimotor invariants that connect arbitrary names (elementary symbols?) to the nonarbitrary shapes of objects. This paper examines how and why such compression/expansion effects occur in neural nets.
After people learn to sort objects into categories they see them differently. Members of the same category look more alike and members of different categories look more different. This phenomenon of within-category compression and between-category separation in similarity space is called categorical perception (CP). It is exhibited by human subjects, animals and neural net models. In backpropagation nets trained first to auto-associate 12 stimuli varying along a one-dimensional continuum and then to sort them into 3 categories, CP arises as a natural side-effect because of four factors: (1) Maximal interstimulus separation in hidden-unit space during auto-association learning, (2) movement toward linear separability during categorization learning, (3) inverse-distance repulsive force exerted by the between-category boundary, and (4) the modulating effects of input iconicity, especially in interpolating CP to untrained regions of the continuum. Once similarity space has been "warped" in this way, the compressed and separated "chunks" have symbolic labels which could then be combined into symbol strings that constitute propositions about objects. The meanings of such symbolic representations would be "grounded" in the system's capacity to pick out from their sensory projections the object categories that the propositions were about.
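The two-stage regimen described in the two entries above lends itself to a compact demonstration. The sketch below is illustrative only: the thermometer (iconic) input coding, 8-unit hidden layer, squared-error backprop and learning parameters are our assumptions, not the papers' exact architecture. It trains a shared hidden layer first on auto-association, then on 3-way categorization, and compares adjacent-pair distances in hidden-unit space before and after category learning; CP-style warping shows up as within-category pairs drawing closer relative to between-category pairs.

```python
# Minimal sketch of the CP-in-nets effect -- NOT the authors' original code;
# architecture, input coding and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# 12 stimuli on a one-dimensional continuum, "thermometer"-coded (iconic):
# stimulus i activates the first i+1 of 12 input units.
X = np.tril(np.ones((12, 12)))

# 3 categories of 4 adjacent stimuli each, one-hot coded.
labels = np.repeat(np.arange(3), 4)
Y = np.eye(3)[labels]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer shared by both tasks.
n_hid = 8
W1 = rng.normal(0, 0.5, (12, n_hid))
W2a = rng.normal(0, 0.5, (n_hid, 12))  # auto-association output weights
W2c = rng.normal(0, 0.5, (n_hid, 3))   # categorization output weights

def train(W2, T, epochs, lr=0.5):
    """Full-batch squared-error backprop through the shared hidden layer."""
    global W1
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        dO = (O - T) * O * (1 - O)           # output-layer error signal
        dH = (dO @ W2.T) * H * (1 - H)       # backpropagated to hidden layer
        W2 -= lr * H.T @ dO
        W1 -= lr * X.T @ dH

def mean_dists():
    """Mean hidden-space distance between adjacent within- and between-category pairs."""
    H = sigmoid(X @ W1)
    d = lambda i, j: np.linalg.norm(H[i] - H[j])
    within = np.mean([d(i, i + 1) for i in range(11) if labels[i] == labels[i + 1]])
    between = np.mean([d(i, i + 1) for i in range(11) if labels[i] != labels[i + 1]])
    return within, between

train(W2a, X, epochs=2000)                   # stage 1: auto-association
w0, b0 = mean_dists()
train(W2c, Y, epochs=2000)                   # stage 2: categorization
w1, b1 = mean_dists()

print(f"before categorization: within={w0:.3f} between={b0:.3f}")
print(f"after  categorization: within={w1:.3f} between={b1:.3f}")
# CP-like warping: within-category distances shrink and/or between-category
# distances grow after category training.
```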
Dalgaard's recent article argues that the part of the Web that constitutes the scientific literature is composed of increasingly linked archives. He describes the move in the online communications of the scientific community towards an expanding zone of second-order textuality, of an evolving network of texts commenting on, citing, classifying, abstracting, listing and revising other texts. In this respect, archives are becoming a network of texts rather than simply a classified collection of texts. He emphasizes the definition of hypertext as multi-linear text, in contrast to the simple definition of a hypertext as 'a document with links in'.
Do scientists agree? It is not only unrealistic to suppose that they do, but probably just as unrealistic to think that they ought to. Agreement is for what is already established scientific history. The current and vital ongoing aspect of science consists of an active and often heated interaction of data, ideas and minds, in a process one might call "creative disagreement." The "scientific method" is largely derived from a reconstruction based on selective hindsight. What actually goes on has much less the flavor of a systematic method than of trial and error, conjecture, chance, competition and even dialectic.
Harnad accepts the picture of computation as formalism, so that any implementation of a program - that is, any implementation - is as good as any other; in fact, in considering claims about the properties of computations, the nature of the implementing system - the interpreter - is invisible. Let me refer to this idea as 'Computationalism'. Almost all the criticism, claimed refutation by Searle's argument, and sharp contrasting of this idea with others, rests on the absoluteness of this separation between a computational system and its implementation.
I have a feeling that when Posterity looks back at the last decade of the second millennium A.D. of scholarly and scientific research on our planet, it may chuckle at us. It is not the pace of our scholarly and scientific research that will look risible, nor the tempo of technological change. On the contrary, the astonishing speed and scale of both will make the real anomaly look all the more striking.
Brian Rotman argues that (one) “mind” and (one) “god” are only conceivable, literally, because of (alphabetic) literacy, which allowed us to designate each of these ghosts as an incorporeal, speaker-independent “I” (or, in the case of infinity, a notional agent that goes on counting forever). I argue that to have a mind is to have the capacity to feel. No one can be sure which organisms feel, hence have minds, but it seems likely that one-celled organisms and plants do not, whereas animals do. So minds originated before humans and before language -- hence, a fortiori, before writing, whether alphabetic or ideographic.
What lies on the two sides of the linguistic divide is fairly clear: On one side, you have organisms buffeted about to varying degrees, depending on their degree of autonomy and plasticity, by the states of affairs in the world they live in. On the other side, you have organisms capable of describing and explaining the states of affairs in the world they live in. Language is what distinguishes one side from the other. How did we get here from there? In principle, one can tell a seamless story about how inborn, involuntary communicative signals and voluntary instrumental praxis could have been shaped gradually, through feedback from their consequences, first into analog pantomime with communicative intent, and then into arbitrary category names combined into all-powerful, truth-value-bearing propositions, freed from the iconic "shape" of their referents and able to tell all.
Almost all words are the names of categories. We can learn most of our words (and hence our categories) from dictionary definitions, but not all of them. Some have to be learned from direct experience. To understand a word from its definition we need to already understand the words used in the definition. This is the “Symbol Grounding Problem”. How many words (and which ones) do we need to ground directly in sensorimotor experience in order to be able to learn all other words via definition alone? The answer may shed some light both on the developmental origin of word meanings and on the evolutionary origin and adaptive value of language. We used an algorithm to reduce each of our dictionaries (Longmans LDOCE, Cambridge CIDE and WordNet) to its “grounding kernel” (“Kernel”) (which turned out to be about 10% of the dictionary) by systematically eliminating...
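The reduction described above can be illustrated with a toy dependency graph. The sketch below is a plausible reconstruction, not the paper's exact algorithm: it assumes a `definitions` table mapping each word to the set of content words used in its definition (the eight-word dictionary is hypothetical), and it recursively strips words that no remaining word's definition uses, leaving the mutually defining core.

```python
# Toy sketch of extracting a "grounding kernel" from a dictionary graph --
# an illustration under stated assumptions, not the paper's exact method.
definitions = {
    "animal": {"living", "thing"},
    "dog":    {"animal", "barks"},
    "barks":  {"dog", "sound"},     # circular with "dog"
    "sound":  {"thing"},
    "living": {"thing"},
    "thing":  {"thing"},            # defined in its own terms
    "puppy":  {"dog", "young"},
    "young":  {"living"},
}

def grounding_kernel(definitions):
    """Recursively remove words that appear in no remaining definition.
    What survives is the mutually defining core: every kernel word is
    needed (directly or via cycles) to define some other kernel word."""
    kernel = set(definitions)
    while True:
        # Words still used in the definition of some remaining word.
        used = set().union(*(definitions[w] & kernel for w in kernel))
        removable = kernel - used
        if not removable:
            return kernel
        kernel -= removable

print(sorted(grounding_kernel(definitions)))
# -> ['animal', 'barks', 'dog', 'living', 'sound', 'thing']
```

On a real dictionary of tens of thousands of entries, this kind of pruning is what can shrink the definition graph to a small residual core, like the roughly 10% Kernel reported above.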
In his chapter titled "Consciousness," Charles Taylor suggests that the traditional mind/body, mental/physical dichotomy is an undesirable legacy of the seventeenth century. Its fault is that it gives rise to a dualism that must then be resolved in various unsatisfactory ways. The most prevalent of these ways is currently "functionalism," which explains cognition in terms of functional states and processes like those of a computer and "marginalizes" (i.e., minimizes or denies completely the causal role of) consciousness. The alternative, "interactionism," gives due weight to consciousness but at the cost of adding an independent domain to the physical one, namely, the mental, and possibly tampering indeterminately with physics thereby.
Research is done (mostly at universities) and funded (publicly and privately) in order to advance scientific and scholarly knowledge as well as to produce public benefits (technological and biomedical applications as well as educational and cultural ones). Research and researchers are accordingly funded not only to conduct their research, but to make their findings public, by publishing them. Their employment, salaries, careers and research funding depend on publishing their findings. This is what is often called "publish or perish."
To appreciate what a huge difference there is between the author of a peer reviewed journal article and just about any other kind of author we need only remind ourselves why universities have their "publish or perish" policy: Aside from imparting existing knowledge to students through teaching, the work of a university scholar or scientist is devoted to creating new knowledge for other scholars and scientists to use, apply, and build upon, for the benefit of us all. Creating new knowledge is called "research," and its active use and application are called "research impact." Researchers are encouraged, indeed required, to publish their findings because that is the only way to make their research accessible to and usable by other researchers. It is the only way for research to generate further research. Not publishing it means no access to it by other researchers, and no access means no impact -- in which case the research may as well not have been done in the first place.
Jerry Fodor argues that Darwin was wrong about "natural selection" because (1) it is only a tautology rather than a scientific law that can support counterfactuals ("If X had happened, Y would have happened") and because (2) only minds can select. Hence Darwin's analogy with "artificial selection" by animal breeders was misleading and evolutionary explanation is nothing but post-hoc historical narrative. I argue that Darwin was right on all counts. Until Darwin's "tautology," it had been believed that either (a) God had created all organisms as they are, or (b) organisms had always been as they are. Darwin revealed instead that (c) organisms have heritable traits that evolved across time through random variation, with survival and reproduction in (changing) environments determining (mindlessly) which variants were successfully transmitted to the next generation. This not only provided the (true) alternative (c), but also the methodology for investigating which traits had been adaptive, how and why; it also led to the discovery of the genetic mechanism of the encoding, variation and evolution of heritable traits. Fodor also draws erroneous conclusions from the analogy between Darwinian evolution and Skinnerian reinforcement learning. Fodor’s skepticism about both evolution and learning may be motivated by an overgeneralization of Chomsky’s “poverty of the stimulus argument” -- from the origin of Universal Grammar (UG) to the origin of the “concepts” underlying word meaning, which, Fodor thinks, must be “endogenous,” rather than evolved or learned.
There are many entry points into the problem of categorization. Two particularly important ones are the so-called top-down and bottom-up approaches. Top-down approaches such as artificial intelligence begin with the symbolic names and descriptions for some categories already given; computer programs are written to manipulate the symbols. Cognitive modeling involves the further assumption that such symbol-interactions resemble the way our brains do categorization. An explicit expectation of the top-down approach is that it will eventually join with the bottom-up approach, which tries to model how the hardware of the brain works: sensory systems, motor systems and neural activity in general. The assumption is that the symbolic cognitive functions will be implemented in brain function and linked to the sense organs and the organs of movement in roughly the way a program is implemented in a computer, with its links to peripheral devices such as transducers and effectors.
Europe is losing almost 50% of the potential return on its research investment until research funders and institutions mandate that all research findings must be made freely accessible to all would-be users, webwide. It is not the number of articles published that reflects the return on Europe's research investment: A piece of research, if it is worth funding and doing at all, must not only be published, but used, applied and built upon by other researchers, worldwide. This is called 'research impact' and a measure of it is the number of times an article is cited by other articles ('citation impact').
Scholars and scientists do research to create new knowledge so that other scholars and scientists can use it to create still more new knowledge and to apply it to improving people's lives. They are paid to do research, but not to report their research: That they do for free, because it is not royalty revenue from their research papers but their "research impact" that pays their salaries, funds their further research, earns them prestige and prizes, etc.
Computationalism. According to computationalism, to explain how the mind works, cognitive science needs to find out what the right computations are -- the same ones that the brain performs in order to generate the mind and its capacities. Once we know that, then every system that performs those computations will have those mental states: Every computer that runs the mind's program will have a mind, because computation is hardware-independent: Any hardware that is running the right program has the right computational states.
Certain biological facts are undeniable: Any creature born with a tendency to ignore the calls of nature -- not to eat when hungry, not to mate when horny, not to flee when in harm's way -- would not pass on that unfortunate tendency. Such a creature would instead be the first in a long line of extinct descendants. Maladaptive traits are eliminated from the gene pool by the very definition of what it means to be maladaptive.
William Gardner's (1990) proposal to establish a searchable, retrievable electronic archive is fine, as far as it goes (though he seems to have missed some of the relevant background literature, e.g. Engelbart 1975, 1984a, b; Schatz, 1985, 1987, 1991). The potential role of electronic networks in scientific publication, however, goes far beyond providing searchable electronic archives for electronic journals. The whole process of scholarly communication is currently undergoing a revolution comparable to the one occasioned by the invention of printing. On the brink of intellectual perestroika is that vast PREPUBLICATION phase of scientific inquiry in which ideas and findings are discussed informally with colleagues (currently in person, by phone and by regular mail), presented more formally in seminars, conferences and symposia, and distributed still more widely in the form of preprints and tech reports that have undergone various degrees of peer review. It has now become possible to do all of this in a remarkable new way that is not only incomparably more thorough and systematic in its distribution, potentially global in scale, and almost instantaneous in speed, but so unprecedentedly interactive that it will substantially restructure the pursuit of knowledge.
I want to report a thoroughly (perhaps surreally) modern experience I had recently. First a little context. I've always been a zealous scholarly letter writer (to the point of once being cited in print as "personal communication, pp. 14-20"). These days few share my epistolary penchant, which is dismissed as a doomed anachronism. Scholars don't have the time. Inquiry is racing forward much too rapidly for such genteel dawdling -- forward toward, among other things, due credit in print for one's every minute effort. So I too had resigned myself to the slower turnaround but surer rewards of conventional scholarly publication. Until I came upon electronic mail: almost as rapid and direct and spontaneous as a telephone call, but with the added discipline and permanence of the written medium. I quickly became addicted, "logging on" to check my e-mail at all hours of the day and night and accumulating files of intellectual exchanges with similarly inclined e-epistoleans, files that rapidly approached book length.
My purpose is to explain, first, that there is an alternative to Harnad's version of the symbol grounding problem, which is known as the problem of primitives; second, that there is an alternative to his solution (which is externalist) in the form of a dispositional conception (which is internalist); and, third, that, while the TTT, properly understood, may provide partial and fallible evidence for the presence of similar mental powers, it cannot supply conclusive proof, because more than observable symbolic manipulation and robotic behavior is involved here, as he admits (Harnad 1991). Carrying the problem further appears to require inference to the best explanation.
It is “easy” to explain doing, “hard” to explain feeling. Turing has set the agenda for the easy explanation (though it will be a long time coming). I will try to explain why and how explaining feeling will not only be hard, but impossible. Explaining meaning will prove almost as hard because meaning is a hybrid of know-how and what it feels like to know how.
The usual way to try to ground knowing according to contemporary theory of knowledge is: We know something if (1) it’s true, (2) we believe it, and (3) we believe it for the “right” reasons. Floridi proposes a better way. His grounding is based partly on probability theory, and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data-exchange between a data-seeker who needs to know which button to press on a food-dispenser and a data-knower who already knows the correct number. The success criterion, hence the grounding, is whether the seeker’s probability of lunch is indeed increasing (hence uncertainty is decreasing) as a result of the interaction. Floridi also suggests that his philosophy of information casts some light on the problem of consciousness. I’m not so sure.
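The success criterion can be made concrete with a toy calculation (hypothetical numbers, not Floridi's formalism): a seeker must find which of 8 buttons dispenses lunch, and each truthful yes/no answer from the knower rules out half the remaining candidates, so the probability of lunch rises exactly as uncertainty (in bits) falls.

```python
# Toy illustration: the seeker's probability of lunch increases, and
# uncertainty decreases, with each informative answer from the knower.
import math

candidates = 8                                 # buttons still in play
p_lunch = 1 / candidates                       # guess at random among them
print(f"answers=0  p(lunch)={p_lunch:.3f}  "
      f"uncertainty={math.log2(candidates):.1f} bits")

for answers in range(1, 4):
    candidates //= 2                           # each yes/no answer halves the set
    p_lunch = 1 / candidates
    print(f"answers={answers}  p(lunch)={p_lunch:.3f}  "
          f"uncertainty={math.log2(candidates):.1f} bits")
```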
Turing set the agenda for (what would eventually be called) the cognitive sciences. He said, essentially, that cognition is as cognition does (or, more accurately, as cognition is capable of doing): Explain the causal basis of cognitive capacity and you’ve explained cognition. Test your explanation by designing a machine that can do everything a normal human cognizer can do – and do it so veridically that human cognizers cannot tell its performance apart from a real human cognizer’s – and you really cannot ask for anything more. Or can you? Neither Turing modelling nor any other kind of computational or dynamical modelling will explain how or why cognizers feel.
Christiansen & Chater (C&C) suggest that language is an organism, like us, and that our brains were not selected for Universal Grammar (UG) capacity; rather, languages were selected for learnability with minimal trial-and-error experience by our brains. This explanation is circular: Where did our brain's selective capacity to learn all and only UG-compliant languages come from?
Creativity may be a trait, a state or just a process defined by its products. It can be contrasted with certain cognitive activities that are not ordinarily creative, such as problem solving, deduction, induction, learning, imitation, trial and error, heuristics and "abduction"; however, all of these can be done creatively too. There are four kinds of theories, attributing creativity respectively to (1) method, (2) "memory" (innate structure), (3) magic or (4) mutation. These theories variously emphasize the role of an unconscious mind, innate constraints, analogy, aesthetics, anomalies, formal constraints, serendipity, mental analogs, heuristic strategies, improvisatory performance and cumulative collaboration. There is some virtue in each, but the best model is still the one implicit in Pasteur's dictum: "Chance favors the prepared mind." And because the exercise and even the definition of creativity requires constraints, it is unlikely that "creativity training" or an emphasis on freedom in education can play a productive role in this preparation.
The ethical case for Open Access (OA) (free online access) to research findings is especially salient when it is public health that is being compromised by needless access restrictions. But the ethical imperative for OA is far more general: It applies to all scientific and scholarly research findings published in peer-reviewed journals. And peer-to-peer access is far more important than direct public access. Most research is funded so as to be conducted and published, by researchers, in order to be taken up, used, and built upon in further research and applications, again by researchers (pure and applied, including practitioners), for the benefit of the public that funded it – not in order to generate revenue for the peer-reviewed journal publishing industry (nor even because there is a burning public desire to read much of it). Hence OA needs to be mandated, by researchers' institutions and funders, for all research.
This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.