Open peer commentary on the target article “How and Why the Brain Lays the Foundations for a Conscious Self” by Martin V. Butz. Excerpt: In this commentary on Martin V. Butz’s target article I am especially concerned with his remarks about language (§33, §§71–79, §91) and modularity (§32, §41, §48, §81, §§94–98). In that context, I would like to bring into the discussion my own work on computational models of self-monitoring (cf. Neumann 1998, 2004). In this work I explore the idea of an anticipatory drive as a substantial control device for modelling high-level complex language processes such as self-monitoring and adaptive language use. My work is grounded in computational linguistics and, as such, uses a mathematical and computational methodology. Nevertheless, it might provide some interesting aspects and perspectives for constructivism in general, and for the model proposed in Butz’s article in particular.
The recent trend in cognitive robotics experiments on language learning, symbol grounding, and related issues necessarily entails a reduction of sensorimotor aspects from those provided by a human body to those that can be realized in machines, limiting robotic models of symbol grounding in this respect. Here, we argue that modeling work in this domain needs to take the richer human embodiment explicitly into account, even for concrete concepts that prima facie relate merely to simple actions, and we illustrate this using distributional methods from computational linguistics, which allow us to investigate the grounding of concepts on the basis of their actual usage. We also argue that these techniques have applications in theories and models of grounding, particularly in machine implementations thereof. Similarly, considering the grounding of concepts in human terms may benefit future work in computational linguistics, in particular in going beyond “grounding” concepts in the textual modality alone. Overall, we highlight the potential for a mutually beneficial relationship between the two fields.
This paper reports a procedure which I employed with two computational research instruments, the Index Thomisticus and its companion St. Thomas CD-ROM, in order to research the Thomistic axiom, ‘whatever is received is received according to the mode of the receiver.’ My procedure extends the lexicological methods developed by the pioneering creator of the Index, Roberto Busa, from single terms to a proposition. More importantly, the paper shows how the emerging results of the lexicological searches guided my formation of a philosophical thesis about the axiom’s import for Aquinas’s existential metaphysics.
This book constitutes the thoroughly refereed post-proceedings of the Third International Conference on Logical Aspects of Computational Linguistics, LACL'98, held in Grenoble, France, in December 1998. The 15 revised full papers presented together with one invited paper were carefully reviewed and selected during two rounds of refereeing from 33 submissions and 19 conference presentations. Among the topics covered are various types of grammars, categorical inference, automated reasoning, constraint handling, logical forms, dialogue semantics, unification, and proofs.
Human rights discourse has been likened to a global lingua franca, and in more ways than one, the analogy seems apt. Human rights discourse is a language that is used by all yet belongs uniquely to no particular place. It crosses not only the borders between nation-states, but also the divide between national law and international law: it appears in national constitutions and international treaties alike. But is it possible to conceive of human rights as a global language or lingua franca not just in a figurative or metaphorical sense, but in a literal or linguistic sense as a legal dialect defined by distinctive patterns of word choice and usage? Does there exist a global language of human rights that transcends not only national borders, but also the divide between domestic and international law? Empirical analysis suggests that the answer is yes, but this global language comes in at least two variants or dialects. New techniques for performing automated content analysis enable us to analyze the bulk of all national constitutions over the last two centuries, together with the world’s leading regional and international human rights instruments, for patterns of linguistic similarity and to evaluate how much language, if any, they share in common. Specifically, we employ a technique known as topic modeling that disassembles texts into recurring verbal patterns. The results highlight the existence of two species or dialects of rights talk—the universalist dialect and the positive-rights dialect—both of which are global in reach and rising in popularity. The universalist dialect is generic in content and draws heavily on the type of language found in international and regional human rights instruments. It appears in particularly large doses in the constitutions of transitional states, developing states, and states that have been heavily exposed to the influence of the international community.
The positive-rights dialect, by contrast, is characterized by its substantive emphasis on positive rights of a social or economic variety, and by its prevalence in lengthier constitutions and constitutions from outside the common law world, especially those of the Spanish-speaking world. Both dialects of rights talk are truly transnational, in the sense that they appear simultaneously in national, regional, and international legal instruments and transcend the distinction between domestic and international law. Their existence attests to the blurring of the boundary between constitutional law and international law.
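The pattern-matching idea underlying this kind of analysis can be illustrated with a minimal sketch (not the authors' actual pipeline, which applies topic modeling to full constitutional corpora): each text is reduced to a bag-of-words vector, and lexical similarity is scored by cosine similarity. All example sentences below are invented for illustration.

```python
from collections import Counter
import math
import re

def bow(text):
    # Bag-of-words: lowercase token counts.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two count vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Invented snippets in the style of the two dialects, plus a treaty-like text.
universalist = "everyone has the right to freedom of expression and freedom of assembly"
positive = "the state shall guarantee the right to education health and social security"
treaty = "everyone has the right to freedom of thought expression and peaceful assembly"

# The universalist snippet shares more vocabulary with treaty language.
print(cosine(bow(universalist), bow(treaty)))
print(cosine(bow(positive), bow(treaty)))
```

Topic modeling goes further by decomposing such vectors into recurring patterns, but the raw material is the same word-usage statistics.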
This book constitutes the thoroughly refereed post-proceedings of the Second International Conference on Logical Aspects of Computational Linguistics, LACL '97, held in Nancy, France in September 1997. The 10 revised full papers presented were carefully selected during two rounds of reviewing. Also included are two comprehensive invited papers. Among the topics covered are type theory, various types of grammars, linear logic, parsing, type-directed natural language processing, proof-theoretic aspects, concatenation logics, and mathematical languages.
Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically.
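The co-occurrence technique can be sketched as follows, under the assumption (mine, not necessarily the paper's exact method) that names mentioned together more often belong to nearer sites, so that classical multidimensional scaling on inverted co-occurrence counts recovers relative positions. The site names and counts below are invented.

```python
import numpy as np

# Hypothetical co-occurrence counts between four site names in a corpus;
# nearby sites are assumed to be co-mentioned more often.
sites = ["A", "B", "C", "D"]
co = np.array([[0, 9, 4, 1],
               [9, 0, 8, 2],
               [4, 8, 0, 7],
               [1, 2, 7, 0]], dtype=float)

# Convert counts to dissimilarities: frequent co-mention -> small distance.
d = 1.0 / (1.0 + co)
np.fill_diagonal(d, 0.0)

# Classical MDS: double-center the squared distances, embed with the
# top two eigenvectors of the resulting Gram matrix.
n = len(sites)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (d ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Sites with high co-occurrence land closer together in the embedding.
dist = lambda i, j: np.linalg.norm(coords[i] - coords[j])
print(dist(0, 1), dist(0, 3))
```

Only relative positions are recovered; anchoring the embedding to real coordinates requires artifacts of known provenance, as the abstract describes.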
The article presents proofs of the context-freeness of a family of type-logical grammars, namely all grammars that are based on a uni- or multimodal logic of pure residuation, possibly enriched with the structural rules of Permutation and Expansion for binary modes.
There is currently much interest in bringing together the tradition of categorial grammar, and especially the Lambek calculus, with the recent paradigm of linear logic to which it has strong ties. One active research area is designing non-commutative versions of linear logic (Abrusci, 1995; Retoré, 1993) which can be sensitive to word order while retaining the hypothetical reasoning capabilities of standard (commutative) linear logic (Dalrymple et al., 1995). Some connections between the Lambek calculus and computations in groups have long been known (van Benthem, 1986), but no serious attempt has been made to base a theory of linguistic processing solely on group structure. This paper presents such a model, and demonstrates the connection between linguistic processing and the classical algebraic notions of non-commutative free group, conjugacy, and group presentations. A grammar in this model, or G-grammar, is a collection of lexical expressions which are products of logical forms, phonological forms, and inverses of those. Phrasal descriptions are obtained by forming products of lexical expressions and by cancelling contiguous elements which are inverses of each other. A G-grammar provides a symmetrical specification of the relation between a logical form and a phonological string that is neutral between parsing and generation modes. We show how the G-grammar can be oriented for each of the modes by reformulating the lexical expressions as rewriting rules adapted to parsing or generation, which then have strong decidability properties (inherent reversibility). We give examples showing the value of conjugacy for handling long-distance movement and quantifier scoping both in parsing and generation.
The paper argues that by moving from the free monoid over a vocabulary V (standard in formal language theory) to the free group over V, deep affinities between linguistic phenomena and classical algebra come to the surface, and that the consequences of tapping the mathematical connections thus established can be considerable.
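The cancellation step at the heart of a G-grammar is ordinary free-group word reduction, which can be sketched in a few lines (the symbols below are invented for illustration, not taken from the paper):

```python
def inv(g):
    # In a free group, "x'" denotes the inverse of generator "x",
    # and inverting twice returns the original generator.
    return g[:-1] if g.endswith("'") else g + "'"

def reduce_word(word):
    # Cancel contiguous inverse pairs until no more cancellations apply.
    # A stack handles nested cancellations in a single left-to-right pass.
    out = []
    for g in word:
        if out and out[-1] == inv(g):
            out.pop()  # g cancels with the element to its left
        else:
            out.append(g)
    return out

# Multiplying words and reducing models the derivation of phrasal forms.
print(reduce_word(["a", "b", "b'", "c"]))   # inner pair cancels
print(reduce_word(["a", "b", "b'", "a'"]))  # nested pairs cancel to identity
```

The one-pass stack works because each cancellation can only expose a new cancellable pair at the top of the stack, mirroring the "contiguous inverses" condition in the abstract.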
Some recent studies in computational linguistics have aimed to take advantage of the various cues presented by punctuation marks. This short survey is intended to summarise these research efforts and, additionally, to outline a current perspective on the usage and functions of punctuation marks. We conclude by presenting an information-based framework for punctuation, influenced by treatments of several related phenomena in computational linguistics.
The open-ended character of natural languages calls for the hypothesis that humans are endowed with a recursive procedure generating sentences which are hierarchically organized. Structural relations such as c-command, expressed on hierarchical sentential representations, determine all sorts of formal and interpretive properties of sentences. The relevant computational principles are well beyond the reach of conscious introspection, so that studying such properties requires the formulation of precise formal hypotheses and their empirical testing. This article illustrates all these aspects of linguistic research through the discussion of non-coreference effects. The article argues in favor of the formal linguistic approach based on hierarchical structures, and against alternatives based on vague notions of “analogical generalization” and/or exploiting mere linear order. In the final part, the issue of cross-linguistic invariance and variation of non-coreference effects is addressed. Keywords: Linguistic Knowledge; Morphosyntactic Properties; Unconscious Computations; Coreference; Linguistic Representations.
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures—ambiguity and distinctiveness—derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher level linguistic and cognitive phenomena.
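The general idea behind an information-theoretic ambiguity measure can be illustrated as the entropy of a distribution over candidate interpretations of a sentence. This is a hedged sketch with invented probabilities, not the paper's actual sentence-processing model:

```python
import math

def entropy(probs):
    # Shannon entropy in bits over a probability distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical posteriors over two readings of a sentence.
# A pun keeps both meanings live; a regular sentence lets one dominate.
pun = [0.5, 0.5]
regular = [0.95, 0.05]

print(entropy(pun))      # maximal for two equiprobable readings: 1.0 bit
print(entropy(regular))  # much lower: the sentence is nearly unambiguous
```

Distinctiveness, the paper's second measure, additionally compares how individual words support the competing meanings; the entropy sketch above captures only the ambiguity side.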
Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: references in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented.
We combine state-of-the-art techniques from computational linguistics and theorem proving to build an engine for playing text adventures, computer games with which the player interacts purely through natural language. The system employs a parser for dependency grammar and a generation system based on TAG, and has components for resolving and generating referring expressions. Most of these modules make heavy use of inferences offered by a modern theorem prover for description logic. Our game engine solves some problems inherent in classical text adventures, and is an interesting test case for the interaction between natural language processing and inference.
This paper presents a study of the effect of working memory load on the interpretation of pronouns in different discourse contexts: stories with and without a topic shift. We discuss a computational model (in ACT-R, Anderson, 2007) to explain how referring expressions are acquired and used. On the basis of simulations of this model, it is predicted that WM constraints only affect adults' pronoun resolution in stories with a topic shift, but not in stories without a topic shift. This latter prediction was tested in an experiment. The results of this experiment confirm that WM load reduces adults' sensitivity to discourse cues signaling a topic shift, thus influencing their interpretation of subsequent pronouns.
Extract from Hofstadter's review in the Bulletin of the American Mathematical Society: http://www.ams.org/journals/bull/1980-02-02/S0273-0979-1980-14752-7/S0273-0979-1980-14752-7.pdf

"Aaron Sloman is a man who is convinced that most philosophers and many other students of mind are in dire need of being convinced that there has been a revolution in that field happening right under their noses, and that they had better quickly inform themselves. The revolution is called 'Artificial Intelligence' (AI), and Sloman attempts to impart to others the 'enlightenment' which he clearly regrets not having experienced earlier himself. Being somewhat of a convert, Sloman is a zealous campaigner for his point of view. Now a Reader in Cognitive Science at Sussex, he began his academic career in more orthodox philosophy and, by exposure to linguistics and AI, came to feel that all approaches to mind which ignore AI are missing the boat. I agree with him, and I am glad that he has written this provocative book. The tone of Sloman's book can be gotten across by this quotation (p. 5): 'I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.'"

(The author now regrets the extreme polemical tone of the book.)
The explosion of data grows at a rate of roughly five trillion bits a second, giving rise to greater urgency in conceptualizing the infosphere and understanding its implications for knowledge and public policy. Philosophers of technology and information technologists alike who wrestle with ontological and epistemological questions of digital information tend to emphasize, as Floridi does, information as our new ecosystem and human beings as interconnected informational organisms, inforgs at home in ambient intelligence. But the linguistic and conceptual representations of Big Data—the massive volume of both structured and unstructured data—and the real world practice of data-mining for patterns and meaningful interpretation of evidence reveal tension and ambiguity in the bold promise of data analytics. This paper explores the tacit epistemology of the rhetoric and representation of Big Data and suggests a richer account of its ambiguities and the paradox of its real world materiality. We argue that Big Data should be recognized as manifesting multiple and conflicting trajectories that reflect human intentionality and particular patterns of power and authority. Such patterns require attentive exploration and moral appraisal if we are to resist simplistic informationist ontologies of Big Data, and the subtle forms of control in the political ecology of Big Data that undermine its promise as transformational knowledge.
The notions of argument and argumentation have become increasingly ubiquitous in Artificial Intelligence research, with various applications and interpretations. Less attention, however, has been devoted specifically to rhetorical argument. The work presented in this paper aims at bridging this gap by proposing a framework for characterising rhetorical argumentation, based on Perelman and Olbrechts-Tyteca's New Rhetoric. The paper provides an overview of the state of the art of computational work based on, or dealing with, rhetorical aspects of argumentation, before presenting the proposed characterisation, corroborated by worked examples.
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide an insight into the branching operation, which yields intractable natural language multi-quantifier expressions. Next, we focus on a linguistic case study. We use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences. We show a computational dichotomy between different readings of reciprocity. Finally, we turn to more philosophical speculation on meaning, ambiguity, and computational complexity. In particular, we investigate the possibility of revising the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
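The tractability of iteration and cumulation can be illustrated directly: over a finite model, both readings reduce to nested polynomial-time checks, whereas branching requires searching for a joint witness set. The model below is invented for illustration:

```python
# Generalized quantifiers as predicates over a small finite model.
A = {"a1", "a2"}
B = {"b1", "b2"}
R = {("a1", "b1"), ("a2", "b1"), ("a2", "b2")}  # who relates to whom

def every(dom, pred):
    return all(pred(x) for x in dom)

def some(dom, pred):
    return any(pred(x) for x in dom)

# Iteration: "Every A R-relates to some B" — one pass of nested loops.
iterated = every(A, lambda a: some(B, lambda b: (a, b) in R))

# Cumulation: every A relates to some B, and every B is related to by
# some A — still just nested polynomial-time loops.
cumulative = (every(A, lambda a: some(B, lambda b: (a, b) in R)) and
              every(B, lambda b: some(A, lambda a: (a, b) in R)))

print(iterated, cumulative)
```

Both evaluations run in time polynomial in the model size; the branching reading has no such decomposition into independent nested checks, which is where the intractability the abstract mentions arises.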
This book provides a sustained and penetrating critique of a wide range of views in modern cognitive science and philosophy of the mind, from Turing's famous test for intelligence in machines to recent work in computational linguistic theory. While discussing many of the key arguments and topics, the authors also develop a distinctive analytic approach. Drawing on the methods of conceptual analysis first elaborated by Wittgenstein and Ryle, the authors seek to show that these methods still have a great deal to offer in the field of cognitive theory and the philosophy of mind, providing a powerful alternative to many of the positions put forward in the contemporary literature. Among the many issues discussed in the book are the following: the Cartesian roots of modern conceptions of mind; Searle's 'Chinese Room' thought experiment; Fodor's 'language of thought' hypothesis; the place of 'folk psychology' in cognitivist thought; and the question of whether any machine may be said to 'think' or 'understand' in the ordinary senses of these words. Wide-ranging, up-to-date and forcefully argued, this book represents a major intervention in contemporary debates about the status of cognitive science and the nature of mind. It will be of particular interest to students and scholars in philosophy, psychology, linguistics and computing sciences.
Second-language learners rarely arrive at native proficiency in a number of linguistic domains, including morphological and syntactic processing. Previous approaches to understanding the different outcomes of first- versus second-language learning have focused on cognitive and neural factors. In contrast, we explore the possibility that children and adults may rely on different linguistic units throughout the course of language learning, with specific focus on the granularity of those units. Following recent psycholinguistic evidence for the role of multiword chunks in online language processing, we explore the hypothesis that children rely more heavily on multiword units in language learning than do adults learning a second language. To this end, we take an initial step toward using large-scale, corpus-based computational modeling as a tool for exploring the granularity of speakers' linguistic units. Employing a computational model of language learning, the Chunk-Based Learner, we compare the usefulness of chunk-based knowledge in accounting for the speech of second-language learners versus children and adults speaking their first language. Our findings suggest that while multiword units are likely to play a role in second-language learning, adults may learn less useful chunks, rely on them to a lesser extent, and arrive at them through different means than children learning a first language.
Inspired by the success of generative linguistics and transformational grammar, proponents of the linguistic analogy (LA) in moral psychology hypothesize that careful attention to folk-moral judgments is likely to reveal a small set of implicit rules and structures responsible for the ubiquitous and apparently unbounded capacity for making moral judgments. As a theoretical hypothesis, LA thus requires a rich description of the computational structures that underlie mature moral judgments, an account of the acquisition and development of these structures, and an analysis of those components of the moral system that are uniquely human and uniquely moral. In this paper we present the theoretical motivations for adopting LA in the study of moral cognition: (a) the distinction between competence and performance, (b) poverty of stimulus considerations, and (c) adopting the computational level as the proper level of analysis for the empirical study of moral judgment. With these motivations in hand, we review recent empirical findings that have been inspired by LA and which provide evidence for at least two predictions of LA: (a) the computational processes responsible for folk-moral judgment operate over structured representations of actions and events, as well as coding for features of agency and outcomes; and (b) folk-moral judgments are the output of a dedicated moral faculty and are largely immune to the effects of context. In addition, we highlight the complexity of the interfaces between the moral faculty and other cognitive systems external to it (e.g., number systems). We conclude by reviewing the potential utility of the theoretical and empirical tools of LA for future research in moral psychology.
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
This article uses a 36-million word corpus of news reporting on Hurricane Katrina in the United States to explore how computer-based methods can help researchers to investigate the construction of newsworthiness. It makes use of Bednarek and Caple’s discursive approach to the analysis of news values, and is both exploratory and evaluative in nature. One aim is to test and evaluate the integration of corpus techniques in applying discursive news values analysis. We employ and evaluate corpus techniques that have not been tested previously in relation to the large-scale analysis of news values. These techniques include tagged lemma frequencies, collocation, key part-of-speech tags and key semantic tags. A secondary aim is to gain insights into how a specific happening – Hurricane Katrina – was linguistically constructed as newsworthy in major American news media outlets, thus also making a contribution to ecolinguistics.
We propose a framework for including information-processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds, and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing “levels” approaches to explanation, and to other optimality-based modeling approaches.
We compare our model of unsupervised learning of linguistic structures, ADIOS [1, 2, 3], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive Languages). The representations learned by our algorithm are truly emergent from the (unannotated) corpus data, whereas those found in published works on cognitive and construction grammars and on TAGs are hand-tailored. Thus, our results complement and extend both the computational and the more linguistically oriented research into language acquisition. We conclude by suggesting how empirical and formal study of language can be best integrated.
Miscommunication phenomena such as repair in dialogue are important indicators of the quality of communication. Automatic detection is therefore a key step toward tools that can characterize communication quality and thus help in applications from call center management to mental health monitoring. However, most existing computational linguistic approaches to these phenomena are unsuitable for general use in this way, and particularly for analyzing human–human dialogue: Although models of other-repair are common in human-computer dialogue systems, they tend to focus on specific phenomena, missing the range of repair and repair initiation forms used by humans; and while self-repair models for speech recognition and understanding are advanced, they tend to focus on removal of “disfluent” material that is nonetheless important for full understanding of the discourse contribution, and/or rely on domain-specific knowledge. We explain the requirements for more satisfactory models, including incrementality of processing and robustness to sparsity. We then describe models for self- and other-repair detection that meet these requirements and investigate how they perform on datasets from a range of dialogue genres and domains, with promising results.
Our book Relevance (Sperber and Wilson 1986) treats utterance interpretation as a two-phase process: a modular decoding phase is seen as providing input to a central inferential phase in which a linguistically encoded logical form is contextually enriched and used to construct a hypothesis about the speaker's informative intention. Relevance was mainly concerned with the inferential phase of comprehension: we had to answer Fodor's challenge that while decoding processes are quite well understood, inferential processes are not only not understood, but perhaps not even understandable (see Fodor 1983). Here we will look more closely at the decoding phase and consider what types of information may be linguistically encoded, and how the borderline between decoding and inference can be drawn. It might be that all linguistically encoded information is cut to a single pattern: all truth conditions, say, or all instructions for use. However, there is a robust intuition that two basic types of meaning can be found. This intuition surfaces in a variety of distinctions: between describing and indicating, stating and showing, saying and conventionally implicating, or between truth-conditional and non-truth-conditional, conceptual and procedural, or representational and computational meaning. In the literature, justifications for these distinctions have been developed in both strictly linguistic and more broadly cognitive terms. The linguistic justification goes as follows (see for example Recanati 1987). Utterances express propositions; propositions have truth conditions; but the meaning of an utterance is not exhausted by its truth conditions, i.e. the truth conditions of the proposition expressed. An utterance not only expresses a proposition but is used to perform a variety of speech acts. It can…
The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the textbase and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.
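One kind of cohesion index that a tool in this family computes can be approximated by a simple sketch: the proportion of adjacent sentence pairs sharing a content word (argument overlap). This is an illustrative analogue, not Coh-Metrix's actual implementation; the stop-word list and example texts below are invented.

```python
import re

STOP = {"the", "a", "an", "of", "and", "was", "is", "to", "in"}

def content_words(sentence):
    # Crude content-word extraction: lowercase tokens minus stop words.
    return set(re.findall(r"[a-z]+", sentence.lower())) - STOP

def argument_overlap(sentences):
    # Proportion of adjacent sentence pairs sharing a content word,
    # a rough textbase-cohesion signal.
    pairs = list(zip(sentences, sentences[1:]))
    hits = sum(1 for s1, s2 in pairs if content_words(s1) & content_words(s2))
    return hits / len(pairs)

high = ["The storm hit the coast.",
        "The storm flooded the coastal towns.",
        "Towns were evacuated."]
low = ["The storm hit the coast.",
       "Taxes rose last year.",
       "The recipe needs flour."]

print(argument_overlap(high))  # cohesive text: every adjacent pair overlaps
print(argument_overlap(low))   # incohesive text: no adjacent pair overlaps
```

Readability formulas based on word and sentence length miss exactly this kind of between-sentence signal, which is the limitation the abstract attributes to traditional text difficulty measures.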
This article explores the link between CEOs’ language and hubristic leadership. It is based on the precepts that leaders’ linguistic utterances provide insights into their personality and behaviours; that hubris is associated with unethical and potentially destructive leadership behaviours; and that, if it is possible to identify linguistic markers of CEO hubris, these could serve as early warning signs and help to mitigate the associated risks. Using computational linguistics, we analysed spoken utterances from a sample of hubristic CEOs and compared them with non-hubristic CEOs. We found that hubristic CEOs’ linguistic utterances show systematic and consistent differences from the linguistic utterances of non-hubristic CEOs. Demonstrating how hubristic leadership manifests in CEO language contributes to wider research regarding the diagnosis and prevention of the unethical and potentially destructive effects of hubristic leadership. This research contributes to the wider study of hubris and unethical leadership by applying a novel method for identifying linguistic markers and offers a way of mitigating the risk of unethical and destructive CEO behaviours induced or aggravated by hubristic leadership.