In recent years, Reichenbach’s 1920 conception of the principles of coordination has attracted increased attention after Michael Friedman’s attempt to revive Reichenbach’s idea of a “relativized a priori”. This paper follows the origin and development of this idea in the framework of Reichenbach’s distinction between the axioms of coordination and the axioms of connection. It suggests a further differentiation among the coordinating axioms and accordingly proposes a different account of Reichenbach’s “relativized a priori”.
Philosophers using game-theoretical models of human interactions have, I argue, often overestimated what sheer rationality can achieve. (References are made to David Gauthier, David Lewis, and others.) In particular I argue that in coordination problems rational agents will not necessarily reach a unique outcome that is most preferred by all, nor a unique 'coordination equilibrium' (Lewis), nor a unique Nash equilibrium. Nor are things helped by the addition of a successful precedent, or by common knowledge of generally accepted personal principles. Commitments like those generated by agreements may be necessary for rational expectations to arise. Social conventions, construed as group principles (following the analysis in my book On Social Facts), would suffice for this task.
We re-examine the relationship between coordination, legal sanctions, and free-riding in light of the recent controversy regarding the applicability of the coordination problem paradigm of law-making. We argue that legal sanctions can help solve coordination problems by eliminating socially suboptimal equilibrium outcomes. Once coordination has taken place, however, free-riding cannot lead to the breakdown of coordination outcomes, even if sanctions may still be effective at increasing the equity of such outcomes. Finally, we argue that it is the choice of a legal or constitutional system rather than the choice of law that is paradigmatic of the coordination problem. This view requires a re-assessment of the normative status of sanctions attached to individual laws.
Frege's picture of attitude states and attitude reports requires a notion of content that is shareable between agents, yet more fine-grained than reference. Kripke challenged this picture by giving a case on which the expressions that resist substitution in an attitude report share a candidate notion of fine-grained content. A consensus view developed which accepted Kripke's general moral and replaced the Fregean picture with an account of attitude reporting on which states are distinguished in conversation by their (private) representational properties. I begin in support of the consensus by showing how a sort of de facto coordination on mental symbols is possible, even for unsophisticated agents. But I go on to argue that whenever conditions are ripe for de facto coordination on symbols, there is an inter-subjective relation that supports a fine-grained notion of content resistant to Kripke's challenge. The consensus view corresponds to a Kripke-resistant strain of the Fregean picture.
This paper argues that multiple coordinations like tall, thin and happy are interpreted in a “flat” iterative process, but using “nested” recursive application of binary coordination operators in the compositional meaning derivation. Ample motivation for flat interpretation is shown by contrasting such coordinations with nested, syntactically ambiguous, coordinate structures like tall and thin and happy. However, new evidence coming from type shifting and predicate distribution with verb phrases shows motivation for an independent hierarchical ingredient in the compositional semantics of multiple coordination with no parallel hierarchy in the syntax. This establishes a contrast between operations at the syntax-semantics interface and compositional semantic mechanisms. At the same time, such evidence motivates the treatment of operations like type shifting and distributivity as purely semantic.
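The flat/nested contrast lends itself to a toy Boolean sketch. The following (the predicate definitions and the equivalence check are my own illustrative assumptions, not the paper's analysis) shows how nested recursive application of a binary coordination operator yields the same truth conditions as a flat n-ary conjunction:

```python
# Binary coordination operator on one-place predicates.
def AND(p, q):
    return lambda x: p(x) and q(x)

# Toy predicates (illustrative thresholds, not from the paper).
tall = lambda x: x["height"] > 180
thin = lambda x: x["weight"] < 70
happy = lambda x: x["mood"] == "happy"

# Nested recursive application of the binary operator...
nested = AND(AND(tall, thin), happy)
# ...has the same truth conditions as a flat n-ary conjunction,
# which is why a flat iterative interpretation is available here.
flat = lambda x: all(p(x) for p in (tall, thin, happy))

person = {"height": 185, "weight": 65, "mood": "happy"}
print(nested(person), flat(person))  # True True
```

The equivalence holds only for the unambiguous multi-conjunct case; for syntactically ambiguous structures like tall and thin and happy, the two bracketings of `AND` correspond to genuinely different parses.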
How do rational agents coordinate in a single-stage, noncooperative game? Common knowledge of the payoff matrix and of each player's utility maximization among his strategies does not suffice. This paper argues that utility maximization among intentions and then acts generates coordination yielding a payoff-dominant Nash equilibrium. ‡I thank the audience at my paper's presentation at the 2006 PSA meeting for many insightful points. †To contact the author, please write to: Philosophy Department, University of Missouri, Columbia, MO 65211; e-mail: WeirichP@missouri.edu.
This article begins by pointing out the difficulties involved in introducing freedom into economics: it poses epistemological problems that are not satisfactorily solved by the standard theories. The article suggests that the Aristotelian epistemological frame of practical rationality may be an apt position from which to deal with freedom in economics. Aristotle's concepts of society and economics are first introduced. The role of virtues in achieving economic coordination is set out. Then the corresponding concept of practical science is described, showing its main characteristics and how they fit in with traditional political economy. The concept of value neutrality receives special attention in the article: a reinterpretation of its meaning is proposed. The article concludes that Aristotle's broad concepts of practical reason and science leave room for a more comprehensive notion of economics.
Why is interaction so simple? This article presents a theory of interaction based on the use of shared representations as “coordination tools” (e.g., roundabouts that facilitate coordination of drivers). By aligning their representations (intentionally or unintentionally), interacting agents help one another to solve interaction problems in that they remain predictable, and offer cues for action selection and goal monitoring. We illustrate how this strategy works in a joint task (building together a tower of bricks) and discuss its requirements from a computational viewpoint.
In her book Rationality and Coordination (Cambridge University Press, 1994) Cristina Bicchieri brings together (and adds to) her own contributions to game theory and the philosophy of economics published in various journals in the period 1987-1992. The book, however, is not a collection of separate articles but rather a homogeneous unit organized around some central themes in the foundations of non-cooperative game theory. Bicchieri's exposition is admirably clear and well organized. Somebody with a good knowledge of game theory would probably benefit mainly from reading the second part of Chapter 3 (from Section 3.6 onward) and Chapter 4. On the other hand, those who have had little exposure to game theory would certainly benefit from reading the entire book. I shall begin with an overview of the content of the book and then offer some critical comments on what I consider to be the most important part of it.
This comment makes four related points. First, explaining coordination is different from explaining cooperation. Second, solving the coordination problem is more important for the theory of games than solving the cooperation problem. Third, a version of the Principle of Coordination can be rationalized on individualistic grounds. Finally, psychological game theory should consider how players perceive their gaming situation.
The concept of locally specialized functions dominates research on higher brain function and its disorders. Locally specialized functions must be complemented by processes that coordinate those functions, however, and impairment of coordinating processes may be central to some psychotic conditions. Evidence for processes that coordinate activity is provided by neurobiological and psychological studies of contextual disambiguation and dynamic grouping. Mechanisms by which this important class of cognitive functions could be achieved include those long-range connections within and between cortical regions that activate synaptic channels via NMDA-receptors, and which control gain through their voltage-dependent mode of operation. An impairment of these mechanisms is central to PCP-psychosis, and the cognitive capabilities that they could provide are impaired in some forms of schizophrenia. We conclude that impaired cognitive coordination due to reduced ion flow through NMDA-channels is involved in schizophrenia, and we suggest that it may also be involved in other disorders. This perspective suggests several ways in which further research could enhance our understanding of cognitive coordination, its neural basis, and its relevance to psychopathology. Key Words: attention; cerebral cortex; cognitive coordination; cognitive neuropsychiatry; cognitive neuropsychology; context disorganization; Gamma rhythms; Gestalt theory; glutamate; grouping; memory; NMDA-receptors; PCP-psychosis; perceptual organization; schizophrenia.
Adaptationists explain the evolution of religion from the cooperative effects of religious commitments, but which cooperation problem does religion evolve to solve? I focus on a class of symmetrical coordination problems for which there are two pure Nash equilibria: (1) ALL COOPERATE, which is efficient but relies on full cooperation; (2) ALL DEFECT, which is inefficient but pays regardless of what others choose. Formal and experimental studies reveal that for such risky coordination problems, only the defection equilibrium is evolutionarily stable. The following makes sense of otherwise puzzling properties of religious cognition and cultures as features of cooperative designs that evolve to stabilise such risky exchange. The model is interesting because it explains lingering puzzles in the data on religion, and better integrates evolutionary theories of religion with recent, well-motivated models of cooperative niche construction.
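The ALL COOPERATE / ALL DEFECT structure described in this abstract is that of a stag-hunt-style risky coordination game. A minimal sketch (the payoff numbers are illustrative assumptions, not taken from the paper) of the two pure equilibria, and of why defection is the safer, risk-dominant choice:

```python
# Illustrative symmetric stag-hunt payoffs (row player's payoffs).
# Strategies: 0 = COOPERATE, 1 = DEFECT. Numbers are assumptions.
payoff = [[4, 0],   # cooperating pays 4 only if the other cooperates
          [3, 3]]   # defecting pays 3 regardless of the other's choice

def is_pure_nash(i, j):
    """(i, j) is a pure Nash profile if neither player gains by a
    unilateral deviation."""
    row_ok = payoff[i][j] >= max(payoff[k][j] for k in (0, 1))
    col_ok = payoff[j][i] >= max(payoff[k][i] for k in (0, 1))
    return row_ok and col_ok

pure_nash = [(i, j) for i in (0, 1) for j in (0, 1) if is_pure_nash(i, j)]
print(pure_nash)  # [(0, 0), (1, 1)]: both ALL COOPERATE and ALL DEFECT

# Risk comparison: expected payoff against a 50/50 opponent.
exp_coop = 0.5 * payoff[0][0] + 0.5 * payoff[0][1]    # 2.0
exp_defect = 0.5 * payoff[1][0] + 0.5 * payoff[1][1]  # 3.0
print(exp_defect > exp_coop)  # True: defection is risk-dominant
```

With these numbers the cooperative equilibrium is payoff-dominant but the defection equilibrium is risk-dominant, which is the sense in which coordinating on full cooperation is "risky" exchange.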
It is widely appreciated that establishment and maintenance of coordination are among the key evolutionary promoters and stabilizers of human language. In consequence, it is also generally recognized that game theory is an important tool for studying these phenomena. However, the best known game theoretic applications to date tend to assimilate linguistic communication with signaling. The individualistic philosophical bias in Western social ontology makes signaling seem more challenging than it really is, and thus focuses attention on theoretical problems - for example, coordination on lexical meaning - that actual evolution did not need to solve by improving humans' strategic or social intelligence relative to the endowments of other primates. At the same time, issues of genuine evolutionary significance related to language, especially those around the tensions between individual and collective agency, and around intergenerational accumulation of knowledge, are obscured. This in turn leads to underestimation of the potential contribution that game theory can make to enlightening models of the evolution of human language. JEL classification: A11, A12, B52, C73, D02, D03, D82, Z13.
Game theory's paradoxes stimulate the study of rationality. Sometimes they motivate the revising of standard principles of rationality. Other times they call for revising applications of those principles or introducing supplementary principles of rationality. I maintain that rationality adjusts its demands to circumstances, and in ideal games of coordination it yields a payoff-dominant equilibrium.
The principles that cognitive engineers need to design better work environments are principles that explain interactivity and distributed cognition: how human agents interact with themselves and others, their work spaces, and the resources and constraints that populate those spaces. A first step in developing these principles is to clarify the fundamental concepts of environment, coordination, and behavioural function. Using simple examples, I review the changes the distributed perspective forces on these basic notions.
Human social coordination is often mediated by language. Through verbal dialogue, people direct each other's attention to properties of their shared environment, they discuss how to jointly solve problems, share their introspections, and distribute roles and assignments. In this article, we propose a dynamical framework for the study of the coordinative role of language. Based on a review of a number of recent experimental studies, we argue that shared symbolic patterns emerge and stabilize through a process of local reciprocal linguistic alignment. Such patterns in turn come to facilitate and refine social coordination by enabling the alignment, joint construction and navigation of conceptual models and actions. Implications of the framework are illustrated and discussed in relation to a case study where dyads of interlocutors interact verbally to reach joint decisions in a perceptual discrimination task. Keywords: social coordination; language; communication; linguistic alignment; symbolic patterns; affordances; emergence; evolution; adaptivity; interaction.
The traditional solution concept for noncooperative game theory is the Nash equilibrium, which contains an implicit assumption that players' probability distributions satisfy probabilistic independence. However, in games with more than two players, relaxing this assumption results in a more general equilibrium concept based on joint beliefs (Vanderschraaf, 1995). This article explores the implications of this joint-beliefs equilibrium concept for two kinds of conflictual coordination games: crisis bargaining and public goods provision. We find that, using updating consistent with Bayes' rule, players' beliefs converge to equilibria in joint beliefs which do not satisfy probabilistic independence. In addition, joint beliefs greatly expand the set of mixed equilibria. On the face of it, allowing for joint beliefs might be expected to increase the prospects for coordination. However, we show that if players use joint beliefs, which may be more likely as the number of players increases, then the prospects for coordination in these games decline vis-à-vis independent beliefs.
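The kind of correlation that joint beliefs permit, and that probabilistic independence rules out, can be seen in a minimal three-player example (the probability numbers are illustrative assumptions, not the article's models):

```python
# A perfectly correlated joint belief over three players' binary
# actions: either everyone plays 0 or everyone plays 1.
joint = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}

def marginal(player, action):
    """Probability that `player` takes `action` under the joint belief."""
    return sum(p for profile, p in joint.items() if profile[player] == action)

# Each player's marginal belief is uniform...
print([marginal(i, 0) for i in range(3)])  # [0.5, 0.5, 0.5]

# ...but the product of marginals assigns (0, 0, 0) probability
# 0.125, not 0.5: this joint belief cannot be factored into
# independent individual beliefs.
indep = marginal(0, 0) * marginal(1, 0) * marginal(2, 0)
print(indep, joint[(0, 0, 0)])  # 0.125 0.5
```

A Nash equilibrium requires beliefs of the factorable kind; the joint-beliefs concept admits distributions like the one above.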
Prior research suggests that the action system is responsible for creating an immediate sense of self by determining whether certain sensations and perceptions are the result of one's own actions. In addition, it is assumed that declarative, episodic, or autobiographical memories create a temporally extended sense of self or some form of identity. In the present article, we review recent evidence suggesting that action (procedural) knowledge also forms part of a person's identity, an action identity, so to speak. Experiments that addressed self-recognition of past actions, prediction, and coordination provide ample evidence for this assumption. The phenomena observed in these experiments can be explained by the assumption that observing an action results in the activation of action representations, the more so when the action observed corresponds to the way in which the observer would produce it.
Following Schelling (1960), coordination problems have mainly been considered in a context where agents can achieve a common goal (e.g., rendezvous) only by taking common actions. Dynamic versions of this problem have been studied by Crawford and Haller (1990), Ponssard (1994), and Kramarz (1996). This paper considers an alternative dynamic formulation in which the common goal (dispersion) can only be achieved by agents taking distinct actions. The goal of spatial dispersion has been studied in static models of habitat selection, location or congestion games, and network analysis. Our results show how this goal can be achieved gradually, by indistinguishable non-communicating agents, in a dynamic setting.
Information processing theories of memory and skills can be reformulated in terms of how categories are physically and temporally related, a process called conceptual coordination. Dreaming can then be understood as a story-understanding process in which two mechanisms found in everyday comprehension are missing: conceiving sequences (chunking categories in time as a higher-order categorization) and coordinating across modalities (e.g., relating the sound of a word and the image of its meaning). On this basis, we can readily identify isomorphisms between dream phenomenology and neurophysiology, and explain the function of dreaming as facilitating future coordination of sequential, cross-modal categorization (i.e., REM sleep lowers activation thresholds, “unlearning”). [Hobson et al.; Nielsen; Solms; Revonsuo; Vertes & Eastman]
The paper presents a variation of the EMAIL Game, originally proposed by Rubinstein (American Economic Review, 1989), in which coordination of the more rewarding-risky joint course of actions is shown to obtain, even when the relevant game is, at most, "mutual knowledge." In the example proposed, a mediator is introduced in such a way that two individuals are symmetrically informed, rather than asymmetrically as in Rubinstein, about the game chosen by nature. As long as the message failure probability is sufficiently low, with the upper bound being a function of the game payoffs, conditional beliefs in the opponent's actions can allow players to choose a more rewarding-risky action. The result suggests that, for efficient coordination to obtain, the length of interactive knowledge on the game, possibly up to "almost common knowledge," does not seem to be a major conceptual issue and that emphasis should be focused instead on the communication protocol and an appropriate relationship between the reliability of communication channels and the payoffs at stake.
This book attempts to marry truth-conditional semantics with cognitive linguistics in the church of computational neuroscience. To this end, it examines the truth-conditional meanings of coordinators, quantifiers, and collective predicates as neurophysiological phenomena that are amenable to a neurocomputational analysis. Drawing inspiration from work on visual processing, and especially the simple/complex cell distinction in early vision (V1), we claim that a similar two-layer architecture is sufficient to learn the truth-conditional meanings of the logical coordinators and logical quantifiers. As a prerequisite, much discussion is given over to what a neurologically plausible representation of the meanings of these items would look like. We eventually settle on a representation in terms of correlation, so that, for instance, the semantic input to the universal operators (e.g., and, all) is represented as maximally correlated, while the semantic input to the universal negative operators (e.g., nor, no) is represented as maximally anticorrelated. On the basis of this representation, the hypothesis can be offered that the function of the logical operators is to extract an invariant feature from natural situations, that of degree of correlation between parts of the situation. This result sets up an elegant formal analogy to recent models of visual processing, which argue that the function of early vision is to reduce the redundancy inherent in natural images. Computational simulations are designed in which the logical operators are learned by associating their phonological form with some degree of correlation in the inputs, so that the overall function of the system is as a simple kind of pattern recognition. Several learning rules are assayed, especially those of the Hebbian sort, which are the ones with the most neurological support. Learning vector quantization (LVQ) is shown to be a perspicuous and efficient means of learning the patterns that are of interest.
We draw a formal parallelism between the initial, competitive layer of LVQ and the simple cell layer in V1, and between the final, linear layer of LVQ and the complex cell layer in V1, in that the initial layers are both selective, while the final layers both generalize. It is also shown how the representations argued for can be used to draw the traditionally recognized inferences arising from coordination and quantification, and why the inference of subalternacy breaks down for collective predicates. Finally, the analogies between early vision and the logical operators allow us to advance the claim of cognitive linguistics that language is not processed by proprietary algorithms, but rather by algorithms that are general to the entire brain. Thus in the debate between objectivist and experiential metaphysics, this book falls squarely into the camp of the latter. Yet it does so by means of a rigorous formal, mathematical, and neurological exposition – in contradiction of the experiential claim that formal analysis has no place in the understanding of cognition. To make our own counter-claim as explicit as possible, we present a sketch of the LVQ structure in terms of mereotopology, in which the initial layer of the network performs topological operations, while the final layer performs mereological operations. The book is meant to be self-contained, in the sense that it does not assume any prior knowledge of any of the many areas that are touched upon. It therefore contains mini-summaries of: biological visual processing, especially the retinocortical and ventral ("what") parvocellular pathways; computational models of neural signaling, in particular the reduction of the Hodgkin-Huxley equations to the connectionist and integrate-and-fire neurons; Hebbian learning rules and the elaboration of learning vector quantization; the linguistic pathway in the left hemisphere; memory and the hippocampus; truth-conditional vs. image-schematic semantics; objectivist vs. experiential metaphysics; and mereotopology. All of the simulations are implemented in MATLAB, and the code is available from the book's website. Its principal contributions are the discovery of several algorithmic similarities between vision and semantics, the support of these findings by means of simulations, and the packaging of all of this in a coherent theoretical framework.
Schizophrenics exhibit a deficit in theory of mind (ToM), but an intact theory of biology (ToB). One explanation is that ToM relies on an independent module that is selectively damaged. Phillips & Silverstein's analyses suggest an alternative: ToM requires the type of coordination that is impaired in schizophrenia, whereas ToB is spared because this type of coordination is not involved.
Context, connection, and coordination (CCC) describe well where the problems that apply to thought-disordered patients with schizophrenia lie. But they may be part of the experience of those with other symptom constellations. Switching is an important mechanism to allow context to be applied appropriately to changing circumstances. In some cases, NMDA-voltage modulations may be central, but gain and shift are also functions that monoaminergic systems express in CCC.
Studies of aging and autism as outlined by Bertone, Mottron, & Faubert (Bertone et al.) and by Faubert & Bertone suggest that disorders of cognitive coordination involving impairments of dynamic gestalt grouping and context-sensitivity may be common to several different disorders. We agree that such studies may shed light on these processes and their neuronal bases. However, we also emphasize that dynamic grouping and context-sensitivity can fail in various ways, and that, although the underlying pathophysiology may often involve NMDA-receptor malfunction, many different malfunctions are possible, and each of these may result from any one of a number of different etiologies.
This dissertation is based on the compositional model theoretic approach to natural language semantics that was initiated by Montague (1970) and developed by subsequent work. In this general approach, coordination and negation are treated following Keenan & Faltz (1978, 1985) using boolean algebras. As in Barwise & Cooper (1981) noun phrases uniformly denote objects in the boolean domain of generalized quantifiers. These foundational assumptions, although elegant and minimalistic, are challenged by various phenomena of coordination, plurality and scope. The dissertation solves these problems by developing a flexible process of meaning composition, as first proposed by Partee & Rooth (1983). Flexible interpretation involves semantic operations without any phonological counterpart, which participate in the interpretation process and change meanings of overt expressions. The dissertation introduces a novel flexible system where a small number of operations describe the behaviour of complex phenomena such as 'non-boolean' and, the scope of indefinites and the semantics of collectivity with quantificational NPs. The proposed theory is based on a distinction between two features of meanings in natural language.
The target article presents a model for schizophrenia spanning four levels of abstraction: molecules, cells, cognition, and syndrome. An important notion in the model is that of coordination, applicable at both the cellular and the cognitive level. The molecular level provides an “implementation” of the coordination at the cellular level, which in turn underlies the coordination at the cognitive level, giving rise to the clinical symptoms.
We calculate the Lebesgue measures of the stability sets of Nash equilibria in pure coordination games. The results allow us to observe that the ordering induced by the Lebesgue measure of stability sets upon strict Nash equilibria does not necessarily agree with the ordering induced by risk dominance. Accordingly, an equilibrium selection theory based on the Lebesgue measure of stability sets would be necessarily different from one which uses the Nash property as a point of orientation.
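As a toy illustration (not the paper's construction, and with assumed payoffs), in a 2×2 pure coordination game the stability set of each strict equilibrium can be identified with the interval of opponent beliefs against which it is the unique best reply; the interval's length is a one-dimensional stand-in for the Lebesgue measure discussed here:

```python
from fractions import Fraction

# Toy 2x2 pure coordination game: both earn a if both pick A, b if
# both pick B, 0 on miscoordination. Payoff values are assumptions.
a, b = Fraction(3), Fraction(1)

# Against an opponent playing A with probability p, A is the best
# reply iff p*a > (1-p)*b, i.e. iff p > b/(a+b).
threshold = b / (a + b)

# Length of the belief interval in which each strict equilibrium is
# the unique best reply: a 1-D analogue of the Lebesgue measure of
# its stability set.
measure_A = 1 - threshold   # beliefs favouring (A, A)
measure_B = threshold       # beliefs favouring (B, B)
print(measure_A, measure_B)  # 3/4 1/4
```

In this toy game the larger-measure equilibrium happens also to be risk-dominant; the abstract's point is that in general the two orderings can come apart.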
The Phillips & Silverstein model of NMDA-mediated coordination deficits provides a useful heuristic for the study of schizophrenic cognition. However, the model does not specifically account for the development of schizophrenia-spectrum disorders. The P&S model is compared to Meehl's seminal model of schizotaxia, schizotypy, and schizophrenia, as well as the model of schizophrenic cognitive dysfunction posited by McCarley and colleagues.
Consideration of color alone can give a misleading impression of the three approaches to category coordination: the nativist, empiricist and culturalist models. Empiricist models can benefit from a wider range of correlational information in the environment. Also, all three approaches may explain a set of perceptual categories within the human repertoire. Finally, a suggestion is offered for supplementing the naming game by varying the social status of agents.
Neurophysiological investigations of the past two decades have consistently demonstrated a deficit in sensory gating associated with schizophrenia. Phillips & Silverstein interpret this impairment as being consistent with cognitive coordination dysfunction. However, the physiological mechanisms that underlie sensory gating have not been shown to involve gamma-band oscillations or NMDA-receptors, both of which are critical neural elements in the cognitive coordination model.
This paper is concerned with adaptive learning and coordination processes. Implementing agent-based modeling techniques (Learning Classifier Systems, LCS), we focus on the twofold impact of cognitive and environmental complexity on learning and coordination. Within this framework, we introduce the notion of Adaptive Learning Agent with Rule-based Memory (ALARM), which is a particular class of Artificial Adaptive Agent (AAA, Holland and Miller 1991). We show that equilibrium is approached to a high degree, but never perfectly reached. We also demonstrate that memorization and learning capacities depend upon the relative discordance between the cognitive complexity of agents' mental models and the degree of stability of the environment.
Although interesting, the hypotheses proposed by Phillips & Silverstein lack unifying structure both in specific mechanisms and in cited evidence. They provide little to support the notion that low-level sensory processing and high-level cognitive coordination share dynamic grouping by synchrony as a common processing mechanism. We suggest that more realistic large-scale modeling at multiple levels is needed to address these issues.
What insights does comparative biology provide for furthering scientific understanding of the evolution of dynamic coordination? Our discussions covered three major themes: (a) the fundamental unity in functional aspects of neurons, neural circuits, and neural computations across the animal kingdom; (b) brain organization–behavior relationships across animal taxa; and (c) the need for broadly comparative studies of the relationship of neural structures, neural functions, and behavioral coordination. Below we present an overview of neural machinery and computations that are shared by all nervous systems across the animal kingdom, and the related fact that there really are no “simple” relationships in coordination between nervous systems and the behavior they produce. The simplest relationships seen in living organisms are already fairly complex by computational standards. These realizations led us to think about ways that brain similarities and differences could be used to produce new insights into complex brain–behavior phenomena (including a critical appraisal of the roles of cortical and noncortical structures in mammalian behavior), and to think briefly about how future studies could best exploit comparative methods to elucidate better general principles underlying the neural mechanisms associated with behavioral coordination. In our view, it is unlikely that the intricacies interrelating neural and behavioral coordination are due to one particular manifestation (such as neural oscillation or the possession of a six-layered cortex). Instead of considering the human cortex to be the standard against which all things are measured (and thus something to crow about), both broad and focused comparative studies on behavioral similarities and differences will be necessary to elucidate the fundamental principles underlying dynamic coordination.
ChickenHawk is a social-dilemma game that distinguishes uncoordinated from coordinated cooperation. In tests with players belonging to a culturally homogeneous population, natural-language cheap talk led to efficient coordination, while nonlinguistic signaling yielded uncoordinated altruism. In a subsequent test with players from a moderately more heterogeneous population nearby, the cheap talk condition still produced better coordination than other signaling conditions, but at a lower level and with fewer acts of altruism overall. Implications are: (1) without language, even willing cooperators coordinate poorly; (2) given a sufficiently homogeneous social group, language can coordinate cooperation in the face of opportunities for anonymous defection; (3) coordination therefore depends not on merely a general propensity to cooperate but on the overlap of social identities, which are always costly to acquire and maintain. So far as linguistic variation establishes how much social identities overlap, natural-language cheap talk is self-insuring, suggesting that linguistic variation is itself adaptive.
The disagreement about intertemporal coordination between Austrians and Keynesians is explained by pointing to differences both in the way expectations and motivations are treated and in the methodological principles assumed by each view. Austrians believe that research should proceed by showing first what guarantees a successful coordination of individuals' plans, and only later showing what could hinder the “natural” course. Keynes, on the contrary, does not start with any ideal state of affairs, but allows economies to work either “well” or “badly” according to the prevailing expectations among entrepreneurs. In this way uncertainty and expectations are fully incorporated into economic theory. A link between the Austrian approach and Popperian situational analysis is suggested.
On many occasions, individuals are able to coordinate their actions. The first empirical evidence to this effect was described by Schelling (1960) in an informal experiment. His results were corroborated many years later by Mehta et al. (1994a,b) and Bacharach and Bernasconi (1997). From the point of view of mainstream game theory, the success of individuals in coordinating their actions is something of a mystery. If there are two or more strict Nash equilibria, mainstream game theory has no means of explaining why people tend to choose their part of one and the same equilibrium. Textbooks (see, e.g., Rasmusen, 1989 and Kreps, 1990) refer to the fact that players may use focal points (see Schelling (1960)) or social conventions (see Lewis (1969)). Neither notion can easily be incorporated into mainstream game theory, however. The notion of social conventions has recently been extensively studied in the context of evolutionary game theory, where agents in a population interact with each other. The central focus of this paper, however, is on situations where a few players play a game only once, and I study how they may coordinate their actions.
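The multiplicity of strict equilibria described above can be made concrete with a minimal sketch. The payoff matrix below is a hypothetical illustration of a pure coordination game, not taken from any of the cited works:

```python
# Enumerate pure-strategy Nash equilibria of a 2x2 coordination game.
# Payoffs are illustrative: both players prefer matching, and (A, A)
# Pareto-dominates (B, B), yet both matched profiles are strict equilibria.

# (row_payoff, col_payoff) indexed by (row_action, col_action)
payoffs = {
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}
actions = ["A", "B"]

def is_nash(r, c):
    """True if neither player gains by a unilateral deviation."""
    ur, uc = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= ur for r2 in actions)
    col_ok = all(payoffs[(r, c2)][1] <= uc for c2 in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # [('A', 'A'), ('B', 'B')]
```

The enumeration itself cannot say which of the two equilibria the players will pick; that selection problem is exactly where focal points and conventions enter.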
Abstract In the ideal market of general equilibrium theory, choices are made in full knowledge of one another, and all expectations are fulfilled. This pre-harmonization of individual plans does not occur in real-world markets, where decisions must be taken in ignorance of one another. The Austrian school grants this, but claims that real-world price systems are nonetheless effective in coordinating saving and investment decisions, which are motivated by disparate considerations. In contrast, Keynes held that without the pre-reconciliation of individual plans, investment and employment would be less than optimal, and the resulting distribution of income arbitrary and inequitable.
What is the relation between norms (in the sense of ‘socially accepted rules’) and conventions? A number of philosophers have suggested that there is some kind of conceptual or constitutive relation between them. Some hold that conventions are or entail special kinds of norms (the ‘conventions-as-norms thesis’). Others hold that at least some norms are or entail special kinds of conventions (the ‘norms-as-conventions thesis’). We argue that both theses are false. Norms and conventions are crucially different conceptually and functionally in ways that make it the case that it is a serious mistake to try to assimilate them. They are crucially different conceptually in that whereas conventions are not normative and are behaviour dependent and desire dependent, norms are normative, behaviour independent, and desire independent. They are crucially different functionally in that whereas conventions principally serve the function of facilitating coordination, norms principally serve the function of making us accountable to one another.
Although ‘Rxx’ and ‘Rxy’ are both applications of a two-place predicate to a pair of terms, ‘Rxx’ resembles a one-place predicate in that all one needs to evaluate it is an assignment to ‘x’. A similar point applies to the sequences ‘Fx’, ‘Gx’ and ‘Fx’, ‘Gy’ – even though neither is a one-place predicate. Kit Fine’s semantic relationalism aims to extract a common idea uniting these comparisons, and to use it to provide a Millian solution to Frege’s Puzzle.
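The observation about ‘Rxx’ and ‘Rxy’ can be made concrete with a toy extensional semantics. The relation R and its extension below are hypothetical illustrations, not Fine's own apparatus:

```python
# Evaluating 'Rxx' vs 'Rxy' under variable assignments.
# R is a toy extension for a two-place predicate over a small domain.
R = {(1, 1), (1, 2)}

def evaluate(formula_vars, assignment):
    """Truth of R applied to the given variable sequence under an assignment."""
    return tuple(assignment[v] for v in formula_vars) in R

# 'Rxx' is settled by an assignment to 'x' alone, like a one-place predicate:
print(evaluate(("x", "x"), {"x": 1}))  # True, since (1, 1) is in R
# 'Rxy' needs assignments to both 'x' and 'y':
print(evaluate(("x", "y"), {"x": 1, "y": 2}))  # True, since (1, 2) is in R
```

Note that both calls apply the same two-place R; the difference lies entirely in how many variable assignments the formula's evaluation depends on.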
Humans are closely coupled with their environments. They rely on being ‘embedded’ to help coordinate the use of their internal cognitive resources with external tools and resources. Consequently, everyday cognition, even cognition in the absence of external supports, may be viewed as partially distributed. As cognitive scientists, our job is to discover and explain the principles governing this distribution: principles of coordination, externalization, and interaction. As designers, our job is to use these principles, especially if they can be converted to metrics, in order to invent and evaluate candidate designs. After discussing a few principles of interaction and embedding, I discuss the usefulness of a range of metrics derived from economics, computational complexity, and psychology.
Enactive approaches foreground the role of interpersonal interaction in explanations of social understanding. This motivates, in combination with a recent interest in neuroscientific studies involving actual interactions, the question of how interactive processes relate to neural mechanisms involved in social understanding. We introduce the Interactive Brain Hypothesis (IBH) in order to help map the spectrum of possible relations between social interaction and neural processes. The hypothesis states that interactive experience and skills play enabling roles in both the development and current function of social brain mechanisms, even in cases where social understanding happens in the absence of immediate interaction. We examine the plausibility of this hypothesis against developmental and neurobiological evidence and contrast it with the widespread assumption that mindreading is crucial to all social cognition. We describe the elements of social interaction that bear most directly on this hypothesis and discuss the empirical possibilities open to social neuroscience. We propose that the link between coordination dynamics and social understanding can be best grasped by studying transitions between states of coordination. These transitions form part of the self-organization of interaction processes that characterize the dynamics of social engagement. The patterns and synergies of this self-organization help explain how individuals understand each other. Various possibilities for role-taking emerge during interaction, determining a spectrum of participation. This view contrasts sharply with the observational stance that has guided research in social neuroscience until recently. We also introduce the concept of readiness to interact to describe the practices and dispositions that are summoned in situations of social significance (even if not interactive). This latter idea links interactive factors to more classical observational scenarios.
This paper begins by raising a puzzle about what function our use of the word ‘rational’ could serve. To solve the puzzle, I introduce a view I call Epistemic Communism: we use epistemic evaluations to promote coordination among our basic belief-forming rules, and the function of this is to make the acquisition of knowledge by testimony more efficient.
This paper engages the extended cognition controversy by advancing a theory which fits nicely into an attractive and surprisingly unoccupied conceptual niche situated comfortably between traditional individualism and the radical externalism espoused by the majority of supporters of the extended mind hypothesis. I call this theory moderate active externalism, or MAE. In alliance with other externalist theories of cognition, MAE is committed to the view that certain cognitive processes extend across brain, body, and world—a conclusion which follows from a theory I develop in “Synergic Coordination: an argument for cognitive process externalism.” Yet, in contradistinction with radical externalism, and in agreement with the internalist orthodoxy, MAE defends the view that mental states are situated invariably inside our heads. This is done, inter alia, by developing a novel hypothesis regarding the vehicles of content (in “Extended cognition without externalized mental states”), and by criticizing arguments in support of mental state externalism (in “Reflections and objections”). The result, I believe, is a coherent theoretical alternative worthy of serious consideration.
The article argues against the common notion of disciplinary medical traditions, i.e. Obstetrics, as macro-structures that unilinearly structure the practices associated with the discipline. It shows that the various existences of Obstetrics, their relations with practices and vice versa, the entities these obstetrical practices render present and related, and the ways they are connected to experiences, are more complex than the unilinear model suggests. What allows participants to go from one topos to another – from Obstetrics to practice, from practice to politics, from politics to experience – is not self-evidently induced by Obstetrics, but needs to be studied as a surprising range of passages that connect (or don't). Techniques and devices to supervise the delivery, to render present the fetus during pregnancy, and to monitor birth are described in order to show that such techniques acquire different roles in connecting and creating Obstetrics as a system and obstetrical practices.
While language is presumably unique to humans, there are possible pre-linguistic features that developed in the course of human evolution which predate features of language, and might have even been essential for its evolution. A number of such possible preadaptations for human language have been discussed, like the permanent lowering of the larynx, the ability to control one’s breath, or the inclination of humans to imitate. In this paper I would like to point out another candidate for a preadaptation, namely the functional differentiation of the hands and the way in which they cooperate in manual actions.
A review of the scanty Gestaltist literature on motor behaviour indicates that a genuine Gestalt theoretic approach to motor behaviour can be characterized by three research questions: (1) What are the natural units of motor behaviour? (2) What characterizes the self-organization in motor behaviour? (3) What are the conditions for invariance in motor behaviour? Tentative answers to these questions can be found by analysing the parallels between Gestalt theory and Bernstein's theory of motor actions and by showing that Gestalt theory can be regarded as a specific approach to non-linear dynamics as exemplified by synergetics (Haken, 1991). The congruence between the Gestalt theoretic approach and synergetics becomes apparent in the analysis of how a complex motor task is learned.
Peter Carruthers correctly argues for a cognitive conception of the role of language. But such a story need not include the excess baggage of compositional inner codes, mental modules, mentalese, or translation into logical form (LF).
Sharing a public language facilitates particularly efficient forms of joint perception and action by giving interlocutors refined tools for directing attention and aligning conceptual models and action. We hypothesized that interlocutors who flexibly align their linguistic practices and converge on a shared language will improve their cooperative performance on joint tasks. To test this prediction, we employed a novel experimental design, in which pairs of participants cooperated linguistically to solve a perceptual task. We found that dyad members generally showed a high propensity to adapt to each other’s linguistic practices. However, although general linguistic alignment did not have a positive effect on performance, the alignment of particular task-relevant vocabularies strongly correlated with collective performance. In other words, the more dyad members selectively aligned linguistic tools fit for the task, the better they performed. Our work thus uncovers the interplay between social dynamics and sensitivity to task affordances in successful cooperation.
What is the proper unit of analysis in the psycholinguistics of dialog? While classical approaches are largely based on models of individual linguistic processing, recent advances stress the social coordinative nature of dialog. In the influential interactive alignment model, dialog is thus approached as the progressive entrainment of interlocutors' linguistic behaviors toward the alignment of situation models. Still, the driving mechanisms are attributed to individual cognition in the form of automatic structural priming. Challenging these ideas, we outline a dynamical framework for studying dialog based on the notion of interpersonal synergy. Crucial to this synergetic model is the emphasis on dialog as an emergent, self-organizing, interpersonal system capable of functional coordination. A consequence of this model is that linguistic processes cannot be reduced to the workings of individual cognitive systems but must be approached also at the interpersonal level. From the synergy model follow a number of new predictions: beyond simple synchrony, good dialog affords complementary dynamics, constrained by contextual sensitivity and functional specificity. We substantiate our arguments by reference to recent empirical studies supporting the idea of dialog as interpersonal synergy.
This paper presents the results of an experiment on mutual versus common knowledge of advice in a two-player weak-link game with random matching. Our experimental subjects play in pairs for thirteen rounds. After a brief learning phase common to all treatments, we vary the knowledge levels associated with external advice given in the form of a suggestion to pick the strategy supporting the payoff-dominant equilibrium. Our results are somewhat surprising and can be summarized as follows: in all our treatments both the choice of the efficiency-inducing action and the percentage of efficient equilibrium play are higher with respect to the control treatment, revealing that even a condition as weak as mutual knowledge of level 1 is sufficient to significantly increase the salience of the efficient equilibrium with respect to the absence of advice. Furthermore, and contrary to our hypothesis, mutual knowledge of level 2 induces, under suitable conditions, successful coordination more frequently than common knowledge.
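Weak-link games of the kind described above are standardly modeled with a minimum-effort payoff: each player's earnings rise with the group's lowest effort and fall with their own. The payoff function and coefficients below are a hypothetical sketch in that standard form, not the parameters of this experiment:

```python
# Illustrative payoff for an n-player weak-link (minimum-effort) game.
# Coefficients a, b, c are hypothetical; the structure is the standard
# "reward the minimum, penalize own effort" form.

def weak_link_payoff(efforts, i, a=2, b=1, c=6):
    """Payoff to player i given the full effort profile."""
    return a * min(efforts) - b * efforts[i] + c

# The all-high profile (payoff-dominant equilibrium) beats the all-low one:
print(weak_link_payoff([7, 7], 0))  # 2*7 - 1*7 + 6 = 13
print(weak_link_payoff([1, 1], 0))  # 2*1 - 1*1 + 6 = 7
# But unilaterally choosing high effort against a low partner is worst of all:
print(weak_link_payoff([7, 1], 0))  # 2*1 - 1*7 + 6 = 1
```

The last line shows why the efficient equilibrium is fragile: high effort only pays if everyone chooses it, which is exactly why advice and its knowledge level can matter for coordination.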
The properties of the phosphate uptake system of the cyanobacterium Anacystis nidulans have been studied during the transition from a phosphate-deficient non-growing state to a non-deficient growing state. In the phosphate-deficient state the high affinity phosphate transport system in the cell membrane is extremely adaptive. As a result of these adaptive features the phosphate transport system cannot be described by determinate, fixed parameters, because the transport system is influenced by the measurement of the uptake process itself. When the growing state has been initiated by a persisting phosphate pulse, the transport system rapidly loses its adaptive features and can then be characterized by determinate parameters that remain unchanged for a long period of time, even if no uptake occurs in that time. Depending on the amount of phosphate stored during a pulse the cell makes a choice between slow or fast growth. In the latter case the light harvesting and energy converting machinery is completely reorganized before growth commences. Thereby the components of this machinery conform to each other and to the stable properties of the phosphate transport system. It is suggested that the mutual adjustment of these adaptive energy converting subunits is guided by attractors that function as the final cause for the development of the whole system. An application of this model to an analysis of the self-organization of aquatic ecosystems is discussed.
Pickering & Garrod's (P&G's) theory of dialogue production cannot completely explain recent data showing that when interactants in referential communication tasks have different views of a physical space, they accommodate their language to their partner's view rather than mimicking their partner's expressions. Instead, these data are consistent with the hypothesis that interactants are taking the perspective of their conversational partners.
The increasing knowledge intensity of jobs, typical of a knowledge economy, highlights the role of firms as integrators of know-how and skills. As economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, firms need to allocate skills to tasks, and traditional hierarchical control becomes increasingly ineffective. In this work, we explore under what circumstances networks of agents bearing specific skills may self-organize in order to complete tasks. We use a computer simulation approach and investigate how local interaction among agents, endowed with skills and individual decision-making rules, may produce aggregate network structures able to perform tasks. To design algorithms that mimic individual decision-making, we borrow from the computer science literature and, in particular, from studies addressing protocols that produce cooperation in Peer-to-Peer networks. We found that self-organization depends on the structural features of formal or informal organizational networks embedding both professionals, who hold skills, and project managers, who hold access to jobs.
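The kind of simulation gestured at above can be sketched in miniature. The skill names, agent population, and greedy recruitment rule below are illustrative assumptions, not the authors' model:

```python
# Toy sketch of skill-to-task allocation without hierarchical control:
# tasks are staffed by a simple local rule that recruits idle agents by skill.

SKILLS = ["design", "code", "test"]

# Twelve agents, one skill each, assigned round-robin (four per skill).
agents = [{"id": i, "skill": SKILLS[i % 3], "busy": False} for i in range(12)]

def form_team(task_skills):
    """Greedy local rule: recruit the first idle agent for each required skill."""
    team = []
    for s in task_skills:
        match = next((a for a in agents if a["skill"] == s and not a["busy"]), None)
        if match is None:
            return None  # the task cannot be staffed
        team.append(match)
    for a in team:
        a["busy"] = True
    return team

# Five tasks each require the full skill set; only four can be staffed,
# since there are only four agents per skill.
completed = sum(1 for _ in range(5) if form_team(SKILLS) is not None)
print(completed, "of 5 tasks staffed")  # 4 of 5 tasks staffed
```

Even this minimal version exhibits the paper's structural point: whether tasks get done depends on the composition of the network (how many holders of each skill are reachable), not on any central allocator.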
Individualists and externalists about language take themselves to be disagreeing about the basic subject matter of the study of language. Are linguistic facts really facts about individuals, or really facts about language use in a community? The right answer to this question, I argue, is ‘Yes’. Both individualistic and social facts are crucial to a complete understanding of human language. The relationship between the theories inspired by these facts is analogous to the relationship between anatomy and ecology, or between micro- and macro-economics: both types of facts are important objects of study in their own right, but we need a theory that accounts for the complex relationship between the two. I argue that modern extensions of the signaling-games approach of Lewis (1969) do just this, defusing the conflict while preserving the core positive insights of both sides of this debate. The upshot is that arguments for social externalism and the normativity of meaning pose no threat to individualist explanations and can be accounted for within a naturalistic theory of language. A good externalist theory will make crucial reference to individualistic facts, but go further by examining language users’ interactions in a systematic way.
The distinction between syntactic and semantic techniques in linguistic theory is by now sufficiently clear. What is often debated is the extent to which syntactic and semantic considerations should be used in analyzing a given phenomenon. An empirical domain where the division of labour between syntax and semantics is especially problematic is the case of ``non-overt'' scope, or what I prefer to call the..
In this essay, I explore a metaphor in geometry for the debate between the unity and the disunity of science, namely, the possibility of putting a global coordinate system (or a chart) on a manifold. I explain why the former is a good metaphor that shows what it means (and takes in principle) for science to be unified. I then go through some of the existing literature on the unity/disunity debate and show how the metaphor sheds light on some of the views and arguments.
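The geometric fact behind the metaphor can be stated briefly. While ℝⁿ admits a single global chart, a compact manifold such as the sphere S² does not: a global chart φ : S² → U ⊆ ℝ² would make U both compact and open in ℝ², which is impossible for a nonempty set. At least two charts are needed, for instance the two stereographic projections:

```latex
\varphi_N(x, y, z) = \left( \frac{x}{1 - z}, \; \frac{y}{1 - z} \right),
\qquad
\varphi_S(x, y, z) = \left( \frac{x}{1 + z}, \; \frac{y}{1 + z} \right),
```

defined on S² minus the north pole and S² minus the south pole respectively. On the metaphor, a unified science corresponds to a single chart covering everything; a disunified one to an atlas of overlapping local charts glued by transition maps.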