The mechanisms and abilities of agents, compared to the emergent outcomes, are summarised for three different scenarios from my past work: the El Farol Game; an Artificial Stock Market; and the Iterated Prisoner’s Dilemma. Within each of these, the presence or absence of different agent abilities was examined, the results being summarised here – some turning out to be necessary, some not. The ability to recognise other agents, either by characteristics or by name, is a recurring theme. Some of the difficulties in reaching a systematic understanding of these connections are briefly discussed.
We present a computational simulation which captures aspects of negotiation as the interaction of agents searching for an agreement over their own mental models. Specifically, this simulation relates the beliefs of each agent about cause and effect to the resulting negotiation dialogue. The model highlights the difference between negotiating to find any solution and negotiating to obtain the best solution from the point of view of each agent. The latter case corresponds most closely to what is commonly called “haggling”. This approach also highlights the importance of what each agent thinks is possible in terms of actions causing changes, and of what the other agents are able to do in any situation. This simulation greatly extends other simulations of bargaining, which usually only focus on the case of haggling over a limited number of numerical indexes. Three detailed examples are considered. The simulation framework is relatively well suited to participatory methods of elicitation, since the “nodes and arrows” representation of beliefs is commonly used and thus accessible to stakeholders and domain experts.
This paper argues that truth is by nature context-dependent – that no truth can be applied regardless of context. I call this “strong contextualism”. Some objections to this are considered and rejected, principally: that there are universal truths given to us by physics, logic and mathematics; and that claiming “no truths are universal” is self-defeating. Two “models” of truth are suggested to indicate that strong contextualism is coherent. It is suggested that some of the utility of the “universal framework” can be recovered via a more limited “third person viewpoint”. Keywords: philosophy, universality, context, truth, knowledge.
The paper considers the problem of how a distributed system of agents (who communicate only via a localised network) might achieve consensus by copying beliefs from each other (‘copy’) and doing some belief pruning themselves (‘drop’). This is explored using a social simulation model, where beliefs interact with each other via a compatibility function, which assigns a level of compatibility (a sort of weak consistency) to a set of beliefs. The probability of the copy and drop processes occurring is based on the increase in compatibility the process might result in. This allows for a process of collective consensus building, whilst allowing for temporarily incompatible beliefs to be held by an agent.
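The copy/drop mechanism described above can be sketched in a few lines. The compatibility function, the network, the logistic probability rule and the belief representation below are illustrative assumptions for the sketch, not the model's actual specification.

```python
import math
import random

random.seed(1)

def compatibility(beliefs):
    """Toy compatibility: a belief b clashes with its negation -b."""
    clashes = sum(1 for b in beliefs if -b in beliefs)
    return len(beliefs) - clashes

def step(agents, network):
    """One round of the 'copy' and 'drop' processes for every agent."""
    for i, beliefs in enumerate(agents):
        # 'copy': consider adopting a belief from a random neighbour
        neighbour = agents[random.choice(network[i])]
        if neighbour:
            candidate = random.choice(sorted(neighbour))
            gain = compatibility(beliefs | {candidate}) - compatibility(beliefs)
            if random.random() < 1 / (1 + math.exp(-gain)):
                beliefs.add(candidate)
        # 'drop': consider pruning one of the agent's own beliefs
        if beliefs:
            b = random.choice(sorted(beliefs))
            gain = compatibility(beliefs - {b}) - compatibility(beliefs)
            if random.random() < 1 / (1 + math.exp(-gain)):
                beliefs.discard(b)

# Three agents on a line network; beliefs are signed integers.
agents = [{1, -2}, {2, 3}, {-1, 3}]
network = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(100):
    step(agents, network)
```

Because both processes are merely biased (not forced) by the compatibility gain, an agent can hold temporarily incompatible beliefs, as the abstract notes.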
Prior theory – that is, theorising on the basis of thought and intuition, as opposed to attempting to explain observed data – inevitably distorts what comes after. It biases us in the selection of our data (the data model) and certainly biases any theorising that follows. It does this because we (as humans) cannot help but see the world through our theorising – we are blind without the theoretical “spectacles” described by Kuhn (1962). If a theory has been shown to be essentially correct in some domain (i.e. by thorough validation against the target problem or domain), using it as a framework can be helpful; however, if the theory is not mature, or even speculative, then it can effectively prevent progress. I argue that, although we cannot ever completely avoid this sort of bias, we can minimise its effect. Two sources of prior theorising coming from opposite directions are sociology and formal systems – neither of these is inherently biased towards prior theorising, but each just happens to be a source for such theorising at the present time. Computer scientists who project the results of interesting models onto society are also guilty of constructing first and fitting later.
Some issues and varieties of computational and other approaches to understanding socially embedded phenomena are discussed. It is argued that of all the approaches currently available, only agent-based simulation holds out the prospect for adequately representing and understanding phenomena such as social norms.
Formidable difficulties face anyone trying to model social phenomena using a formal system, such as a computer program. The differences between formal systems and complex, multi-faceted and meaning-laden social systems are so fundamental that many will criticise any attempt to bridge this gap. Despite this, there are those who are so bullish about the project of social simulation that they appear to believe that simple computer models, that are also useful and reliable indicators of how aspects of society work, are not only possible but within our grasp. This paper seeks to pour cold water on such optimism but, on the other hand, show that useful computational models might be ‘evolved’. In this way it disagrees with both naive positivist and relativistic post-modernist positions. However, this will require a greater ‘selective pressure’ against models that are not grounded in evidence (‘floating models’), and will result in a plethora of complex and context-specific models.
An investigation into the conditions conducive to the emergence of heterogeneity among agents is presented. This is done by using a model of creative artificial agents to investigate some of the possibilities. The simulation is based on Brian Arthur's 'El Farol Bar' model, but extended so that the agents also learn and communicate. The learning and communication are implemented using an evolutionary process acting upon a population of strategies inside each agent. This evolutionary learning process is based on a Genetic Programming algorithm, chosen to make the agents as creative as possible and thus allow the outside edge of the simulation trajectory to be explored. A detailed case study from the simulations shows how the agents differentiated so that, by the end of the run, they had taken on qualitatively different roles. It provides some evidence that the introduction of a flexible learning process and an expressive internal representation has facilitated the emergence of this heterogeneity.
Determinism is the thesis that a future state is completely determined by a past state of something – thus its future course is fixed when the initial state is given. Before the discovery of quantum mechanics many people thought the universe was deterministic, rather like a huge clock. Indeterminacy is when something is NOT deterministic, that is, the initial state does not completely determine all subsequent ones. Indeterminacy is an important topic, and doubly so for those involved in social simulation. Due to my interest in these issues, and the rather low level of insight provided by the chapters in this book, I will briefly summarise some of the issues as they relate to social simulation first, and relate them to the book later.
The use of context can considerably facilitate reasoning, by restricting the beliefs reasoned upon to those relevant and by providing extra information specific to the context. Despite the use and formalization of context being extensively studied in both AI and ML, context has not been much utilized in agents. This may be because many agents are only applied in a single context, and so these aspects are implicit in their design; or it may be that the need to explicitly encode information about various contexts is onerous. An algorithm to learn the appropriate context along with knowledge relevant to that context gets around these difficulties and opens the way for the exploitation of context in agent design. The algorithm is described, and the agents compared with agents that learn and apply knowledge in a generic way within an artificial stock market. The potential for context as a principled manner of closely integrating crisp reasoning and fuzzy learning is discussed.
In recent years there has been an explosion of published literature utilising Multi-Agent-Based Simulation (MABS) to study social, biological and artificial systems. This kind of work is evidenced within JASSS but is increasingly becoming part of mainstream practice across many disciplines.
The perspective of modelling agents, rather than using them for a specified purpose, entails a difference in approach – in particular, an emphasis on veracity as opposed to efficiency. An approach using evolving populations of mental models is described that goes some way to meet these concerns. It is then argued that social intelligence is not merely intelligence plus interaction, but should allow for individual relationships to develop between agents. This means that, at least, agents must be able to distinguish, identify, model and address other agents, either individually or in groups. In other words, purely homogeneous interaction is insufficient. Two example models are described that illustrate these concerns, the second in detail, where agents act and communicate socially, and where this is determined by the evolution of their mental models. Finally, some problems that arise in the interpretation of such simulations are discussed.
We consider here issues of open access to social simulations, with a particular focus on software licences, though also briefly discussing documentation and archiving. Without any specific software licence, the default arrangements are stipulated by the Berne Convention (for those countries adopting it), and are unsuitable for software to be used as part of the scientific process (i.e. simulation software used to generate conclusions that are to be considered part of the scientific domain of discourse). Without stipulating any specific software licence, we suggest rights that should be provided by any candidate licence for social simulation software, and provide in an appendix an evaluation of some popularly used licences against these criteria.
This book is an archetypal product of the Belief-Desire-Intention (BDI) school of multi-agent systems. It presents what is now the mainstream view as to the best way forward in the dream of engineering reliable software systems out of autonomous agents: using formal logics to specify, implement and verify distributed systems of interacting units, with a guiding analogy of beliefs, desires and intentions. The implicit message behind the book is this: Distributed Artificial Intelligence (DAI) can be a respectable engineering science. It says: we use sound formal systems; can cite established philosophical foundations; and will be able to build reliable and flexible software systems.
A published simulation model (Riolo et al. 2001) was replicated in two independent implementations, so that the results as well as the conceptual design align. This double replication allowed the original to be analysed and critiqued with confidence. In this case, the replication revealed some weaknesses in the original model which otherwise might not have come to light. This shows that unreplicated simulation models and their results cannot be trusted – as with other kinds of experiment, simulations need to be independently replicated.
The SDML programming language, which is optimized for modelling multi-agent interaction within articulated social structures such as organizations, is described with several examples of its functionality. SDML is a strictly declarative modelling language which has object-oriented features and corresponds to a fragment of strongly grounded autoepistemic logic. The virtues of SDML include the ease of building complex models and the facility for representing agents flexibly as models of cognition, as well as modularity and code reusability.
In this paper I will argue that, in general, where the evidence supports two theories equally, the simpler theory is not more likely to be true, and is not likely to be nearer the truth. In other words, simplicity does not tell us anything about model bias. Our preference for simpler theories (apart from their obvious pragmatic advantages) can be explained by the facts that humans are known to elaborate unsuccessful theories rather than attempt a thorough revision, and that a fixed set of data can only justify adjusting a certain number of parameters to a limited degree of precision. No extra tendency towards simplicity in the natural world is necessary to explain our preference for simpler theories. Thus Occam's razor eliminates itself (when interpreted in this form).
It is a lie: nature is not balanced, but tumbling forwards in a damp confusion of forms. Not so much a comforting friend as a science-fiction monster: absorbing all the bullets we shoot at it – each time getting up and coming back at us; each time further mutated and more terrifying.
This book is an argument for the importance of diversity in society. It is not naive, in the sense that it does not argue that any diversity is helpful, but rather tries to distinguish some of the ways in which it can be helpful and, hence, some of the conditions under which it can be helpful. It does this in largely non-technical language, using informal argument, examples and a review of the evidence to support its conclusions. It ends with some policy suggestions, particularly in terms of university admission and job recruitment.
This paper presents an evolutionary simulation where the presence of 'tags' and an inbuilt specialisation in terms of skills result in the development of 'symbiotic' sharing within groups of individuals with similar tags. It is shown that the greater the number of possible sharing occasions, the higher the population that can be sustained using the same level of resources. The 'life-cycle' of a particular cluster of tag-groups is illustrated, showing: the establishment of sharing; a focusing-in of the cluster; the exploitation of the group by a particular skill-group; and the waning of the group. This simulation differs from other tag-based models in that it does not rely on the forced donation of resources to individuals with the same tag, and in that the tolerance mechanism plays a significant part. These 'symbiotic' groups could provide the structure necessary for the true emergence of artificial societies, supporting a division of labour similar to that found in human societies.
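The core interaction of tags, tolerance and skill specialisation might be sketched as below. The tag representation, the sharing rule and all the numbers are invented for illustration; the paper's actual model differs in its details.

```python
import random

random.seed(7)

class Agent:
    def __init__(self):
        self.tag = random.random()                      # observable tag in [0, 1]
        self.tolerance = 0.05                           # how close a tag must be to share
        self.skill = random.choice(["gather", "hunt"])  # inbuilt specialisation
        self.store = 0.0

def harvest(agent, resource):
    """An agent can only exploit the resource matching its skill."""
    return 1.0 if agent.skill == resource else 0.0

def share(agents):
    """Agents with surplus donate half of it to the poorest tag-similar agent.

    Donation is voluntary (triggered by surplus), not forced by tag identity,
    and the tolerance threshold decides who counts as a partner.
    """
    for donor in agents:
        if donor.store <= 1.0:
            continue
        partners = [a for a in agents
                    if a is not donor and abs(a.tag - donor.tag) < donor.tolerance]
        if partners:
            recipient = min(partners, key=lambda a: a.store)
            transfer = (donor.store - 1.0) / 2
            donor.store -= transfer
            recipient.store += transfer

agents = [Agent() for _ in range(50)]
for _ in range(20):
    resource = random.choice(["gather", "hunt"])  # only one skill pays off each round
    for a in agents:
        a.store += harvest(a, resource)
    share(agents)
```

The point of the sketch is that sharing across skill-groups lets tag-clusters ride out rounds in which their own skill earns nothing, which is the 'symbiotic' effect the abstract describes.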
The paper investigates what is meant by "good science" and "bad science", and how these differ as between the natural (physical and biological) sciences on the one hand and the social sciences on the other. We conclude, on the basis of historical evidence, that the natural sciences are much more heavily constrained by evidence and observation than by theory, while the social sciences are constrained by prior theory and hardly at all by direct evidence. Current examples of the latter proposition are taken from recent issues of leading social science journals. We argue that agent-based social simulations can be used as a tool to constrain the development of a new social science by direct (what economists dismiss as anecdotal) evidence, and that to do so would make social science relevant to the understanding and influencing of social processes. We argue that such a development is both possible and desirable. We do not argue that it is likely.
We highlight the limitations of formal methods by exhibiting two results in recursive function theory: that there is no effective means of finding a program that satisfies a given formal specification, nor of checking that a program meets a specification. We also exhibit a ‘simple’ MAS which has all the power of a Turing machine. We then argue that any ‘pure design’ methodology will face insurmountable difficulties in today’s open and complex MAS. Rather, we suggest a methodology based on the classic experimental method – that is, ‘scientific foundations’ for the construction and control of complex MAS.
The aim of this paper is to re-emphasise that the purpose of formal systems is to provide something to map into, and to stem the tide of unjustified formal systems. I start by arguing that expressiveness alone is not a sufficient justification for a new formal system, but that it must be justified on pragmatic grounds. I then deal with a possible objection as might be raised by a pure mathematician, and after that with the objection that theory can be later used by more specific models. I go on to compare two different methods of developing new formal systems: by a priori principles and intuitions; and by post hoc generalisation from data and examples. I briefly describe the phenomenon of “social embedding” and use it to explain the social processes that underpin “normal” and “revolutionary” science. This suggests social grounds for the popularity of normal science. I characterise the “foundational” and “empirical” approaches to the use of formal systems and situate these with respect to “normal” and “revolutionary” modes of science. I suggest that successful sciences (in the sense of developing relevant mappings to formal systems) are either more tolerant of revolutionary ideas, or this tolerance is part of the reason they are successful. I finish by enumerating a number of ‘tell-tale’ signs that a paper is presenting an unjustified formal system.
I claim that in order to pass the Turing Test over any extended period of time, it will be necessary to embed the entity into society. This chapter discusses why this is, and how it might be brought about. I start by arguing that intelligence is better characterised by tests of social interaction, especially in open-ended and extended situations. I then argue that learning is an essential component of intelligence, and hence that a universal intelligence is impossible. These two arguments support the relevance of the Turing Test as a particular but appropriate test of interactive intelligence. I look to the human case to argue that individual intelligence utilises society to a considerable extent for its development. Taking a lead from the human case, I outline how a socially embedded artificial intelligence might be brought about in terms of four aspects: free-will, emotion, empathy and self-modelling. In each case I try to specify what social ‘hooks’ might be required in order for the full ability to develop during a considerable period of in situ acculturation. The chapter ends by speculating what it might be like to live with the result.
The use of MABS (Multi-Agent Based Simulations) is analysed as the modelling of distributed (usually social) systems using MAS as the model structure. It is argued that direct modelling of target systems is rarely attempted; rather, an abstraction of the target systems is modelled, and insights gained about the abstraction are then applied back to the target systems. The MABS modelling process is divided into six steps: abstraction, design, inference, analysis, interpretation and application. Some types of MABS papers are characterised in terms of the steps they focus on, and some criteria for good MABS are formulated in terms of the soundness with which the steps are established. Finally, some practical proposals that might improve the informativeness of the field are suggested.
Genetic Programming (GP) is a technique which permits automatic search for complex solutions using a computer. It goes beyond previous techniques in that it discovers the structure of those solutions. Previously, if one were trying to find an equation to fit a set of data, one would have had to provide the form of the equation (for example, a fourth-degree polynomial) and the computer could then find the appropriate parameters. By contrast, GP can experiment with a whole range of different functional forms, building equations from a menu of functions, symbols and arithmetic operations. Thus GP can be seen as an essentially creative technique. It is good at finding novel solutions where not much is known about the form of the solution.
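The contrast the abstract draws – discovering structure rather than just parameters – can be made concrete with a toy sketch. The function/terminal menu, the target equation and the search loop below are illustrative assumptions; for brevity only random generation and selection are shown, with crossover and mutation omitted.

```python
import operator
import random

# The 'menu' from which equations are built: functions plus terminals.
FUNCS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
TERMS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree from the function/terminal menu."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    func = random.choice(FUNCS)
    return (func, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Evaluate an expression tree at a value of x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    (func, _symbol), left, right = tree
    return func(evaluate(left, x), evaluate(right, x))

def fitness(tree, data):
    """Negative sum of squared errors: higher is better."""
    return -sum((evaluate(tree, x) - y) ** 2 for x, y in data)

# Target relationship: y = x*x + 1. Note that nothing tells the search
# that the answer is a polynomial - the structure itself is searched for.
random.seed(42)
data = [(x, x * x + 1) for x in range(-3, 4)]
population = [random_tree() for _ in range(300)]
best = max(population, key=lambda t: fitness(t, data))
```

A full GP run would repeatedly breed the fitter trees by swapping subtrees (crossover), but even this stripped-down version shows the key point: candidate solutions are whole expression structures, not parameter vectors for a fixed form.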
Two kinds of problem are distinguished: the first of finding processes which produce complex outcomes from the interaction of simple parts, and the second of finding which process resulted in an observed complex outcome. The former I call the easy complexity problem and the latter the hard complexity problem. It is often assumed that progress with the easy problem will aid progress with the hard problem. However, this assumes that the “reverse engineering” problem, of determining the process from the outcomes, is feasible. Taking a couple of simple models of reverse engineering, I show that this task is infeasible in the general case. Hence it cannot be assumed that reverse engineering is possible, and hence most of the time progress on the easy problem will not help with the hard problem, unless there are special properties of a particular set of processes that make it feasible. Assuming that complexity science is not merely an academic “game”, and given the analysis of this paper, some criteria for the kinds of paper that have a reasonable chance of being eventually useful for understanding observed complex systems are outlined. Many complexity papers do not fare well against these criteria.
Finding suitable analysis techniques for networks generated from social processes is a difficult task when the population changes over time. Traditional social network analysis measures may not work in such circumstances. It is argued that agent-based social networks should not be constrained by a priori assumptions about the evolved network and/or the analysis techniques. In most agent-based social simulation models, the number of agents remains fixed throughout the simulation; this paper considers the case when this does not hold. Thus the aim of this paper is to demonstrate how the network signatures change when the agents’ population depends upon endogenous social processes. We argue for much wider attention from the social simulation community in addressing this open research problem.
It is argued that, given the “anti-anthropomorphic” principle – that the universe is not structured for our benefit – modelling trade-offs will necessarily mean that many of our models will be context-specific. It is argued that context-specificity is not the same as relativism. The “context heuristic” – that of dividing processing into rich, fuzzy context-recognition and crisp, conscious reasoning and learning – is outlined. The consequences of accepting the impact of this human heuristic, in the light of the necessity of accepting context-specificity in our modelling of complex systems, are examined – in particular, the development of “islands”, or related model clusters, rather than over-arching laws and theories. It is suggested that by accepting and dealing with context (rather than ignoring it) we can push the boundaries of science a little further.
The notion of quality is analysed for its functional roots as a social heuristic for reusing others’ quality judgements and hence aiding choice. This is applied to the context of academic publishing, where the costs of publishing have greatly decreased, but the problem of finding the papers one wants has become harder. This paper suggests that, instead of relying on generic quality judgements, such as those delivered by journal reviewers, the maximum amount of judgemental information be preserved and then made available to potential readers to help them find papers that meet their particular needs. The suggestion is that multidimensional quality data be captured on review of papers, stored in a database, and then used to filter papers according to the criteria set by the searcher – personalising the quality filter. In other words, the quality judgements and their subsequent use are maintained in a disaggregated form, preserving the maximum informational context of the judgements for future use. The advantages, disadvantages, challenges and possible variations of this proposal are discussed.
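The proposal's core idea – disaggregated review scores filtered by reader-set criteria rather than a single generic verdict – is simple enough to sketch. The quality dimensions, scores and paper names below are invented for illustration.

```python
# Hypothetical multidimensional review scores, stored per paper rather than
# collapsed into a single accept/reject judgement.
papers = {
    "paper-A": {"rigour": 4, "novelty": 2, "readability": 5},
    "paper-B": {"rigour": 2, "novelty": 5, "readability": 3},
    "paper-C": {"rigour": 5, "novelty": 4, "readability": 2},
}

def personalised_filter(papers, criteria):
    """Return the papers meeting every minimum score the searcher sets."""
    return sorted(name for name, scores in papers.items()
                  if all(scores[dim] >= minimum for dim, minimum in criteria.items()))

# Two readers, two different notions of 'quality' from the same stored data:
novelty_seeker = personalised_filter(papers, {"novelty": 4})
careful_reader = personalised_filter(papers, {"rigour": 4, "readability": 4})
```

The same stored judgements serve both readers: the novelty-seeker retrieves papers B and C, while the reader wanting rigorous, readable work retrieves only paper A – which a single aggregated quality score could not do.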
“The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.” – Simon Newcomb, Professor of Mathematics, Johns Hopkins University, 1901. Free will is described in terms of the useful properties that it could confer, explaining why it might have been selected for over the course of evolution. These are: exterior unpredictability; interior rationality; and social accountability. A process is described that might bring it about when deployed in a suitable social context. It is suggested that this process could itself be of an evolutionary nature – that free will might “evolve” in the brain during development. This mental evolution effectively separates the internal and external contexts, whilst retaining the coherency between individuals’ public accounts of their actions. This is supported by the properties of evolutionary algorithms and possesses the three desired properties. Some objections to the possibility of free will are dealt with by pointing out the prima facie evidence, and by showing how an assumption that everything must be either deterministic or random can result from an unsupported assumption of universalism.
It is argued that complexity is not attributable directly to systems or processes, but rather to the descriptions of their 'best' models, to reflect their difficulty. Thus it is relative to the modelling language and type of difficulty. This approach to complexity is situated in a model of modelling. Such an approach makes sense of a number of aspects of scientific modelling: complexity is not situated between order and disorder; noise can be explicated by approaches to excess modelling error; and simplicity is not truth-indicative, but a useful heuristic when models are produced by a being with a tendency to elaborate in the face of error.
The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather a considerable period of training in situ would be required. The argument that, since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), consequently a TM that could pass the TT could be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed.
Some practical criteria for free-will are suggested, where free-will is a matter of degree. It is argued that these are more appropriate than some extremely idealised conceptions. Thus, although the paper takes lessons from philosophy, it avoids idealistic approaches as irrelevant. A mechanism for allowing an agent to meet these criteria is suggested: that of facilitating the gradual emergence of free-will in the brain via an internal evolutionary process. This meets the requirement that not only must the choice of action be free, but also the choice in the method of choice, and the choice in the method of choice of the method of choice, etc. This is directly analogous to the emergence of life from non-life. Such an emergence of indeterminism with respect to the conditions of the agent fits well with the 'Machiavellian Intelligence Hypothesis', which posits that our intelligence evolved (at least partially) to enable us to deal with social complexity and modelling 'arms races'. There is a clear evolutionary advantage in being internally coherent in seeking to fulfil one's goals, yet unpredictable by one's peers. To fully achieve this vision several other aspects of cognition are necessary: open-ended strategy development; the meta-evolution of the evolutionary process; the facility to anticipate the results of strategies; and the situating of this process in a society of competitive peers. Finally, the requirement that reports of the deliberations that lead to actions need to be socially acceptable leads to the suggestion that the language the strategies are developed within be subject to a normative process, in parallel with the development of free-will. An appendix outlines a philosophical position in support of my position.
The reductionist/holist debate is highly polarised. I propose an intermediate position of pragmatic holism. It derives from two claims: firstly, that irrespective of whether all natural systems are theoretically reducible, for many systems it is utterly impractical to attempt such a reduction; and secondly, that regardless of whether irreducible 'wholes' exist, it is vain to try and prove this. This position illuminates the debate along new pragmatic lines by refocussing attention on the underlying heuristics of learning about the natural world.
When modelling complex systems one cannot include all the causal factors, but has to settle for partial models. This is acceptable if the factors left out are either so constant that they can be ignored, or one is able to recognise the circumstances in which the partial model applies. The transference of knowledge from the point of learning to the point of application utilises a combination of recognition and inference – a simple model of the important features is learnt, and later situations where inferences can be drawn from the model are recognised. Context is an abstraction of the collection of background features that are later recognised. Different heuristics for recognition and model formulation will be effective for different learning tasks. Each of these will lead to a different type of context. Given this, there are (at least) two ways of modelling context: one can either attempt to investigate the contexts that arise out of the heuristics that a particular agent actually applies (the 'internal' approach); or (if this is feasible) one can attempt to model context using the external source of regularity that the heuristics exploit. There are also two basic methodologies for the investigation of context: a top-down (or 'foundationalist') approach, where one tries to lay down general, a priori principles; and a bottom-up (or 'scientific') approach, where one tries to find what sorts of context arise by experiment and simulation. A simulation is exhibited which is designed to illustrate the practicality of the bottom-up approach in elucidating the sorts of internal context that arise in an artificial agent attempting to learn simple models of a complex environment. It ends with a plea for the cooperation of the AI and Machine Learning communities, as both learning and inference are needed if context is to make complete sense.
The reductionist/holist debate seems an impoverished one, with many participants appearing to adopt a position first and construct rationalisations second. Here I propose an intermediate position of pragmatic holism: that irrespective of whether all natural systems are theoretically reducible, for many systems it is completely impractical to attempt such a reduction; and that regardless of whether irreducible 'wholes' exist, it is vain to try and prove this in absolute terms. This position thus illuminates the debate along new pragmatic lines, and refocusses attention on the underlying heuristics of learning about the natural world.