In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing, offering various forms of noisy-position or context-sensitive coding, has revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that readers' insensitivity to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present article discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to modeling visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the cognitive operations common to orthographic processing in all writing systems, are discussed.
Haecceitism is the thesis that, necessarily, in addition to its qualities, each thing has a haecceity or individual essence. The purpose of this paper is to expose a flaw in haecceitism: it entails that familiar cases of fission and fusion either admit of no explanation or else only admit of explanations too bizarre to warrant serious consideration. Because the explanatory problem we raise for haecceitism closely resembles the Euthyphro problem for divine command theory, we refer to our objection as the haecceitic Euthyphro problem, or the Haecceitic Euthyphro for short. We conclude that the objection is decisive against haecceitism.
Contemporary theories of universals have two things in common: first, they are unable to account for necessary connections between universals that form a structure. Second, they leave teleology out of their accounts of instantiation. These facts are not unrelated; the reason contemporary theories have such trouble is that they neglect the ancient idea that universals are ends at which nature aims. If we want a working theory of universals, however, we must return to this idea. Despite its unpopularity among realists, teleology is not a disposable eccentricity, and its dismissal is not an improvement on ancient views.
Levelt et al. describe a model of speech production in which lemma access is achieved via input from nondecompositional conceptual representations. They claim that existing decompositional theories are unable to account for lexical retrieval because of the so-called hyperonym problem. However, existing decompositional models have solved a formally equivalent problem.
Open peer commentary on the article “Constructing Constructivism” by Hugh Gash. Upshot: Gash’s retrospective analysis suggests a number of different roles for RC over the past thirty years. We outline three of these roles and then conduct a thought experiment to argue that while RC itself could be seen as a living theory that accommodates new ideas, its strongest contributions remain when it stays true to its roots and serves as a milestone along the path of educational paradigm shifts.
A universal property of visual word identification is position-invariant letter identification, such that a given letter is coded in the same way in CAT and ACT. This should provide a fundamental constraint on theories of word identification, and, indeed, it inspired some of the theories that Frost has criticized. I show how Colin Davis's spatial coding scheme can, in principle, account for contrasting transposed-letter priming effects and, at the same time, position-invariant letter identification.
Two additional sources of evidence are provided in support of localist coding within connectionist networks. First, only models with localist codes can currently represent multiple pieces of information simultaneously or represent order among a set of items on-line. Second, recent priming data appear problematic for theories that rely on distributed representations. However, a faulty argument advanced by Page is also pointed out.
Open peer commentary on the article “Elementary Students’ Construction of Geometric Transformation Reasoning in a Dynamic Animation Environment” by Alan Maloney. Upshot: This commentary assumes a constructionist perspective to discuss the choice of methods, conclusions and design goals that Panorkou and Maloney make in their study of students’ activities with the Graph ’n Glyphs microworld.
The project of coordinating perception, comprehension, and motor control is an exciting one, but I found it hard to follow some of Pickering & Garrod's (P&G's) arguments as presented. Consequently, my comment is not so much a disagreement with P&G but a query about the logic of forward models: It is not clear how they are supposed to work, nor why they are needed in this (or many other) contexts, and toward that end I present an alternative idea.
According to Pylyshyn, the early visual system is able to categorize perceptual inputs into shape classes based on visual similarity criteria; it is also suggested that written words may be categorized within early vision. This speculation is contradicted by the fact that visually unrelated exemplars of a given letter (e.g., a/A) or word (e.g., read/READ) map onto common visual categories.
We argue that Bayesian models are best categorized as methodological or theoretical. That is, models are used as tools to constrain theories, with no commitment to the processes that mediate cognition, or models are intended to approximate the underlying algorithmic solutions. We argue that both approaches are flawed, and that the Enlightened Bayesian approach is unlikely to help.