The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. However, here agreement among philosophers of science has tended to end as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling given the focus on the relationship between a single model and its supposed target system, and the neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.
We analyze different aspects of our quantum modeling approach to human concepts and, more specifically, focus on the quantum effects of contextuality, interference, entanglement, and emergence, illustrating how each of them makes its appearance in specific situations of the dynamics of human concepts and their combinations. We point out the relation of our approach, which is based on an ontology of a concept as an entity in a state changing under influence of a context, with the main traditional concept theories, that is, prototype theory, exemplar theory, and theory theory. We ponder the question of why quantum theory performs so well in its modeling of human concepts, and we shed light on this question by analyzing the role of complex amplitudes, showing how they allow us to describe interference in the statistics of measurement outcomes, whereas in the traditional theories the statistics of outcomes originates in classical probability weights, without the possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all as unique features of the quantum modeling, are explicitly revealed in this article by analyzing human concepts and their dynamics.
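To make the role of complex amplitudes concrete, here is the generic textbook form of quantum interference for concept combination, offered as an illustrative sketch rather than the authors' exact equations. If concepts A and B are represented by unit vectors $\psi_A$ and $\psi_B$, their combination by the normalized superposition $\tfrac{1}{\sqrt{2}}(\psi_A + \psi_B)$, and a given outcome by a projector $M$ with $\mu(A) = \|M\psi_A\|^2$ and $\mu(B) = \|M\psi_B\|^2$, then

\[
\mu(A \text{ or } B) \;=\; \Bigl\|\,M\,\tfrac{1}{\sqrt{2}}(\psi_A + \psi_B)\Bigr\|^2 \;=\; \tfrac{1}{2}\,\mu(A) \;+\; \tfrac{1}{2}\,\mu(B) \;+\; \operatorname{Re}\,\langle M\psi_A,\, M\psi_B\rangle .
\]

The last term is the interference contribution; classical probability weights can only produce the first two terms, which is why measurement statistics exhibiting interference call for complex amplitudes.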
Interest in the computational aspects of modeling has been steadily growing in philosophy of science. This paper aims to advance the discussion by articulating the way in which modeling and computational errors are related and by explaining the significance of error management strategies for the rational reconstruction of scientific practice. To this end, we first characterize the role and nature of modeling error in relation to a recipe for model construction known as Euler's recipe. We then describe a general model that allows us to assess the quality of numerical solutions in terms of measures of computational errors that are completely interpretable in terms of modeling error. Finally, we emphasize that this type of error analysis involves forms of perturbation analysis that go beyond the basic model-theoretical and statistical/probabilistic tools typically used to characterize the scientific method; this demands that we revise and complement our reconstructive toolbox in a way that can affect our normative image of science.
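One compact way to see how a computational error can be "completely interpretable in terms of modeling error" is the standard backward-error formulation, given here as a generic sketch rather than the paper's own general model. For a problem $y = f(x)$ and a computed result $\hat{y}$, instead of measuring the forward error $\|\hat{y} - f(x)\|$, one asks for the smallest perturbation of the input for which the computed result is exactly right:

\[
\hat{y} = f(x + \Delta x), \qquad \text{backward error} \;=\; \min\{\,\|\Delta x\| \;:\; \hat{y} = f(x + \Delta x)\,\}.
\]

If that $\Delta x$ is no larger than the idealizations and measurement uncertainties already built into the model, the numerical solution is exactly as trustworthy as the model itself, which is the sense in which computational error analysis folds back into modeling error.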
This study provides a basic introduction to agent-based modeling (ABM) as a powerful blend of classical and constructive mathematics, with a primary focus on its applicability for social science research. The typical goals of ABM social science researchers are discussed along with the culture-dish nature of their computer experiments. The applicability of ABM for science more generally is also considered, with special attention to physics. Finally, two distinct types of ABM applications are summarized in order to illustrate concretely the duality of ABM: Real-world systems can not only be simulated with verisimilitude using ABM; they can also be efficiently and robustly designed and constructed on the basis of ABM principles.
Over the last decade, fully distributed models have become dominant in connectionist psychological modelling, whereas the virtues of localist models have been underestimated. This target article illustrates some of the benefits of localist modelling. Localist models are characterized by the presence of localist representations rather than the absence of distributed representations. A generalized localist model is proposed that exhibits many of the properties of fully distributed models. It can be applied to a number of problems that are difficult for fully distributed models, and its applicability can be extended through comparisons with a number of classic mathematical models of behaviour. There are reasons why localist models have been underused, though these often misconstrue the localist position. In particular, many conclusions about connectionist representation, based on neuroscientific observation, can be called into question. There are still some problems inherent in the application of fully distributed systems and some inadequacies in proposed solutions to these problems. In the domain of psychological modelling, localist modelling is to be preferred. Key Words: choice; competition; connectionist modelling; consolidation; distributed; localist; neural networks; reaction-time.
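As a minimal illustration of the localist/distributed contrast at issue (a toy sketch, not the generalized localist model proposed in the article), compare a one-unit-per-concept code with a pattern-over-shared-units code:

    # Toy contrast between localist and distributed representations.
    # Concept names and dimensions are made up for illustration.
    import numpy as np

    concepts = ["cat", "dog", "car"]

    # Localist: one dedicated unit per concept (one-hot vectors).
    localist = np.eye(len(concepts))

    # Distributed: each concept is a dense pattern over 10 shared units.
    rng = np.random.default_rng(0)
    distributed = rng.normal(size=(len(concepts), 10))

    def similarity(reps):
        """Cosine similarity between every pair of concept representations."""
        unit = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        return unit @ unit.T

    print("localist similarities:\n", np.round(similarity(localist), 2))
    print("distributed similarities:\n", np.round(similarity(distributed), 2))
    # Localist codes keep distinct concepts perfectly orthogonal; distributed
    # codes overlap, which supports generalization but can also produce
    # interference between concepts.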
Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (Theoria 26(1):5–33, 2011) put forward the Recursive Bayesian Networks (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.
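To see why acyclicity matters, recall that a Bayesian network's structure must be a directed acyclic graph; the toy check below (illustrative only, with made-up nodes, and not one of the paper's proposed solution strategies) shows how a feedback mechanism immediately violates that requirement:

    # Bayesian nets require a DAG; a feedback loop such as
    # glucose -> insulin -> glucose cannot be encoded directly.
    from graphlib import TopologicalSorter, CycleError

    def is_dag(edges):
        """Return True if the directed graph given as {node: {parents}} is acyclic."""
        try:
            tuple(TopologicalSorter(edges).static_order())
            return True
        except CycleError:
            return False

    acyclic = {"insulin": {"glucose"}, "uptake": {"insulin"}}
    cyclic = {"insulin": {"glucose"}, "glucose": {"insulin"}}

    print(is_dag(acyclic))  # True: admissible as a Bayesian-net structure
    print(is_dag(cyclic))   # False: the cycle must somehow be removed or unrolled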
Modeling of evolution and development of language has principally utilized mature units of spoken language, phonemes and words, as both targets and inputs. This approach cannot address the earliest phases of development because young infants are unable to produce such language features. We argue that units of early vocal development—protophones and their primitive illocutionary/perlocutionary forces—should be targeted in evolutionary modeling because they suggest likely units of hominin vocalization/communication shortly after the split from the chimpanzee/bonobo lineage, and because early development of spontaneous vocal capability is a logically necessary step toward vocal language, a root capability without which other crucial steps toward vocal language capability are impossible. Modeling of language evolution/development must account for dynamic change in early communicative units of form/function across time. We argue for interactive contributions of sender/infants and receiver/caregivers in a feedback loop involving both development and evolution and propose to begin computational modeling at the hominin break from the primate communicative background.
Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: greedy merging rapidly constructs one or more best interpretations of a match in polynomial time; incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; ubiquitous predicates model the varying degrees to which items may suggest alignment; structural evaluation of analogical inferences models aspects of plausibility judgments; and match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these extensions enable SME to capture a broader range of psychological phenomena than before.
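The flavor of the greedy merging step can be conveyed with a toy sketch (hypothetical scores and conflict relation, and far simpler than the actual SME implementation): local match hypotheses are taken in descending score order and added to the global interpretation only when they are consistent with what has already been accepted:

    # Toy greedy merge: combine locally consistent match hypotheses ("kernels")
    # into one interpretation with a single pass, rather than exhaustive search.
    def greedy_merge(kernels, consistent):
        """kernels: list of (name, score); consistent: predicate on two names."""
        chosen = []
        for name, score in sorted(kernels, key=lambda k: -k[1]):
            if all(consistent(name, other) for other, _ in chosen):
                chosen.append((name, score))
        return chosen

    # Hypothetical data: k1 and k2 map the same base item to different targets,
    # so they conflict; k3 is compatible with either.
    conflicts = {frozenset({"k1", "k2"})}
    def consistent(a, b):
        return frozenset({a, b}) not in conflicts

    print(greedy_merge([("k1", 5.0), ("k2", 4.0), ("k3", 2.0)], consistent))
    # [('k1', 5.0), ('k3', 2.0)] -- one pass, hence polynomial rather than
    # exhaustive search over all consistent combinations.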
The Lotka–Volterra predator-prey model is a widely known example of model-based science. Here we reexamine Vito Volterra's and Umberto D'Ancona's original publications on the model, and in particular their methodological reflections. On this basis we develop several ideas pertaining to the philosophical debate on the scientific practice of modeling. First, we show that Volterra and D'Ancona chose modeling because the problem in hand could not be approached by more direct methods such as causal inference. This suggests a philosophically insightful motivation for choosing the strategy of modeling. Second, we show that the development of the model follows a trajectory from a "how possibly" to a "how actually" model. We discuss how and to what extent Volterra and D'Ancona were able to advance their model along that trajectory. It turns out they were unable to establish that their model was fully applicable to any system. Third, we consider another instance of model-based science: Darwin's model of the origin and distribution of coral atolls in the Pacific Ocean. Darwin argued more successfully that his model faithfully represents the causal structure of the target system, and hence that it is a "how actually" model.
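For reference, the model in question is standardly written as a pair of coupled ordinary differential equations for prey density $x$ and predator density $y$:

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,
\]

where $\alpha$ is the prey growth rate, $\beta$ the predation rate, $\gamma$ the predator mortality rate, and $\delta$ the efficiency with which consumed prey is converted into predator growth.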
Scientists confronted with multiple explanatory hypotheses as a result of their abductive inferences generally want to reason further on the different hypotheses one by one. This paper presents a modal adaptive logic MLAs that enables us to model abduction in such a way that the different explanatory hypotheses can be derived individually. This modelling is illustrated with a case study on the different hypotheses on the origin of the Moon.
It is largely acknowledged that natural languages emerge not just from human brains but also from rich communities of interacting human brains (Senghas). Yet the precise role of such communities and such interaction in the emergence of core properties of language has largely gone uninvestigated in naturally emerging systems, leaving the few existing computational investigations of this issue in artificial settings. Here, we take a step toward investigating the precise role of community structure in the emergence of linguistic conventions with both naturalistic empirical data and computational modeling. We first show conventionalization of lexicons in two different classes of naturally emerging signed systems: (a) protolinguistic "homesigns" invented by linguistically isolated Deaf individuals, and (b) a natural sign language emerging in a recently formed rich Deaf community. We find that the latter conventionalized faster than the former. Second, we model conventionalization as a population of interacting individuals who adjust their probability of sign use in response to other individuals' actual sign use, following an independently motivated model of language learning (Yang). Simulations suggest that a richer social network, like that of natural (signed) languages, conventionalizes faster than a sparser social network, like that of homesign systems. We discuss our behavioral and computational results in light of other work on language emergence, and other work on behavior in complex networks.
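A minimal simulation in the spirit of the model described above (network sizes, update rule, and parameters are illustrative assumptions, not those of the published study) lets one compare how quickly a densely connected community and a sparsely connected one settle on a shared variant:

    # Agents hold P(use variant A) and nudge it toward what their interlocutors
    # actually produce, in the spirit of a linear reward learner.
    import random

    def simulate(neighbours, steps=5000, gamma=0.02, seed=1):
        rng = random.Random(seed)
        p = {agent: rng.random() for agent in neighbours}
        for _ in range(steps):
            speaker = rng.choice(list(neighbours))
            listener = rng.choice(neighbours[speaker])
            observed = 1.0 if rng.random() < p[speaker] else 0.0
            p[listener] += gamma * (observed - p[listener])   # linear reward-style update
        mean = sum(p.values()) / len(p)
        return max(mean, 1.0 - mean)   # closeness to a population-wide convention

    n = 10
    dense = {i: [j for j in range(n) if j != i] for i in range(n)}   # everyone talks to everyone
    sparse = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}       # ring: two partners each

    print("dense-community agreement: ", round(simulate(dense), 2))
    print("sparse-community agreement:", round(simulate(sparse), 2))
    # At a fixed horizon, the densely connected community typically shows
    # higher agreement, mirroring the faster conventionalization reported above.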
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes models almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
The moral ideology of banking and insurance employees in Spain was examined along with supervisor role modeling and ethics-related policies and procedures for their association with ethical behavioral intent. In addition to main effects, we found evidence supporting the person–situation interactionist perspective: supervisor role modeling had a stronger positive relationship with ethical intention among employees with relativist moral ideology. Also as hypothesized, formal ethical policies and procedures were positively related to ethical intention among those with universal beliefs, but the relationship was much weaker among relativists. Thus, firms wishing to optimally promote ethical attitudes and behavior must tailor their organization-based initiatives to the individual characteristics of their employees.
Experimental modeling in biology involves the use of living organisms (not necessarily so-called "model organisms") in order to model or simulate biological processes. I argue here that experimental modeling is a bona fide form of scientific modeling that plays an epistemic role distinct from that of ordinary biological experiments. What distinguishes experimental models from ordinary experiments is that they use what I call "in vivo representations", in which one kind of causal process is used to stand in for a physically different kind of process. I discuss the advantages of this approach in the context of evolutionary biology.
In recent years, the emergence of a new trend in contemporary philosophy has been observed in the increasing usage of empirical research methods to conduct philosophical inquiries. Although philosophers primarily use secondary data from other disciplines or apply quantitative methods (experiments, surveys, etc.), the rise of qualitative methods (e.g., in-depth interviews, participant observations and qualitative text analysis) can also be observed. In this paper, I focus on how qualitative research methods can be applied within philosophy of science, namely within the philosophical debate on modeling. Specifically, I review my empirical investigations into the issues of model de-idealization, model justification and performativity.
How can mathematical models which represent the causal structure of the world incompletely or incorrectly have any scientific value? I argue that this apparent puzzle is an artifact of a realist emphasis on representation in the philosophy of modeling. I offer an alternative, pragmatic methodology of modeling, inspired by classic papers by modelers themselves. The crux of the view is that models developed for purposes other than explanation may be justified without reference to their representational properties.
Many in philosophy understand truth in terms of precise semantic values, true propositions. Following Braun and Sider, I say that in this sense almost nothing we say is, literally, true. I take the stand that this account of truth nonetheless constitutes a vitally useful idealization in understanding many features of the structure of language. The Fregean problem discussed by Braun and Sider concerns issues about application of language to the world. In understanding these issues I propose an alternative modeling tool summarized in the idea that inaccuracy of statements can be accommodated by their imprecision. This yields a pragmatist account of truth, but one not subject to the usual counterexamples. The account can also be viewed as an elaborated error theory. The paper addresses some prima facie objections and concludes with implications for how we address certain problems in philosophy.
Experimental activity is traditionally identified with testing the empirical implications or numerical simulations of models against data. In critical reaction to the 'tribunal view' on experiments, this essay will show the constructive contribution of experimental activity to the processes of modeling and simulating. Based on the analysis of a case in fluid mechanics, it will focus specifically on two aspects. The first is the controversial specification of the conditions in which the data are to be obtained. The second is conceptual clarification, with a redefinition of concepts central to the understanding of the phenomenon and the conditions of its occurrence.
This article briefly reviews the fundamentals of structural equation modeling for readers unfamiliar with the technique, then goes on to offer a review of the Martin and Cullen paper. In summary, a number of fit indices reported by the authors reveal that the data do not fit their theoretical model, and thus the authors' conclusion that the model was "promising" is unwarranted.
Modeling is an important scientific practice, yet it raises significant philosophical puzzles. Models are typically idealized, and they are often explored via imaginative engagement and at a certain "distance" from empirical reality. These features raise questions such as what models are and how they relate to the world. Recent years have seen a growing discussion of these issues, including a number of views that treat modeling in terms of indirect representation and analysis. Indirect views treat the model as a bona fide object, specified by the modeler and used to represent and reason about some portion of the concrete empirical world. On some indirect views, model systems are abstract entities, such as mathematical structures, while on other views they are concrete hypothetical things. Here I assess these views and offer a novel account of models. I argue that regarding models as abstracta results in some significant tensions with the practice of modeling, especially in areas where non-mathematical models are common. Furthermore, viewing models as concrete hypotheticals raises difficult questions about model-world relations. The view I argue for treats models as direct, albeit simplified, representations of targets in the world. I close by suggesting a treatment of model-world relations that draws on recent work by Stephen Yablo concerning the notion of partial truth.
In this study we use a computational model of language learning called the Model of Syntax Acquisition in Children (MOSAIC) to investigate the extent to which the optional infinitive (OI) phenomenon in Dutch and English can be explained in terms of a resource-limited distributional analysis of Dutch and English child-directed speech. The results show that the same version of MOSAIC is able to simulate changes in the pattern of finiteness marking in 2 children learning Dutch and 2 children learning English as the average length of their utterances increases. These results suggest that it is possible to explain the key features of the OI phenomenon in both Dutch and English in terms of the interaction between an utterance-final bias in learning and the distributional characteristics of child-directed speech in the 2 languages. They also show how computational modeling techniques can be used to investigate the extent to which cross-linguistic similarities in the developmental data can be explained in terms of common processing constraints as opposed to innate knowledge of universal grammar.
During the last decade, scholars have identified a number of factors that pose significant challenges to effective business ethics education. This article offers a "coping-modeling, problem-solving" (CMPS) approach (Cunningham, 2006) as one option for addressing these concerns. A rationale supporting the use of the CMPS framework for courses on ethical decision-making in business is provided, following which the implementation processes for this program are described. Evaluative data collected from N = 101 undergraduate business students enrolled in a third-year required course on ethical decision-making in business indicated that the CMPS model is a promising alternative both for overcoming teaching challenges and for facilitating skill acquisition in the areas of ethical recognition, judgment, and action. Limitations and directions for future research are discussed.
In the legal domain, ontologies enjoy quite some reputation as a way to model normative knowledge about laws and jurisprudence. This paper describes the methodology followed when developing the ontology used by the second version of the prototype Iuriservice, a web-based intelligent FAQ for judicial use. This modeling methodology has had two important requirements: on the one hand, the ontology needed to be extracted from a repository of professional judicial knowledge (containing nearly 800 questions regarding daily practice). Thus, the construction of ontologies of professional judicial knowledge demanded the description of this knowledge as it is perceived by the judge. On the other hand, due to the distributed nature of the environment, there was a need for controlled discussion and traceability of the arguments used in favor of or against the introduction of a concept as part of the domain ontology. This paper presents the Ontology of Professional Judicial Knowledge (OPJK), extracted manually from the selection of relevant terms from judicial practice questions and modeled according to the DILIGENT methodology. We will show that DILIGENT has proved to be a methodology that facilitates ontology engineering in a distributed environment, although appropriate tool support still needs to be developed.
The article first addresses the importance of cognitive modeling, in terms of its value to cognitive science (as well as other social and behavioral sciences). In particular, it emphasizes the use of cognitive architectures in this undertaking. Based on this approach, the article addresses, in detail, the idea of a multi-level approach that ranges from social to neural levels. In the physical sciences, a rigorous set of theories is a hierarchy of descriptions/explanations, in which causal relationships among entities at a high level can be reduced to causal relationships among simpler entities at a more detailed level. We argue that a similar hierarchy makes possible an equally productive approach toward cognitive modeling. The levels of models that we conceive in relation to cognition include, at the highest level, sociological/anthropological models of collective human behavior, behavioral models of individual performance, cognitive models involving detailed mechanisms, representations, and processes, as well as biological/physiological models of neural circuits, brain regions, and other detailed biological processes.
In this paper I argue that the appropriate analogy for “understanding what makes simulation results reliable” in global climate modeling is not with scientific experimentation or measurement, but—at least in the case of the use of global climate models for policy development—with the applications of science in applied design problems. The prospects for using this analogy to argue for the quantitative reliability of GCMs are assessed and compared with other potential strategies.
Richard Levins has advocated the scientific merits of qualitative modeling throughout his career. He believed an excessive and uncritical focus on emulating the models used by physicists and maximizing quantitative precision was hindering biological theorizing in particular. Greater emphasis on qualitative properties of modeled systems would help counteract this tendency, and Levins subsequently developed one method of qualitative modeling, loop analysis, to study a wide variety of biological phenomena. Qualitative modeling has been criticized for being conceptually and methodologically problematic. As a clear example of a qualitative modeling method, loop analysis shows this criticism is indefensible. The method has, however, some serious limitations. This paper describes loop analysis, its limitations, and attempts to clarify the differences between quantitative and qualitative modeling, in content and objective. Loop analysis is but one of numerous types of qualitative analysis, so its limitations do not detract from the currently underappreciated and underdeveloped role qualitative modeling could have within science.
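A small numerical example of the kind of qualitative, press-perturbation question loop analysis addresses (the signed community matrix here is a generic prey-predator illustration, not one of Levins' own cases):

    # At equilibrium, the long-run response of each species to a sustained
    # increase in species i's growth rate has the sign of column i of -A^{-1},
    # where A is the community matrix (the Jacobian at equilibrium).
    import numpy as np

    A = np.array([[-1.0, -1.0],   # prey: self-damping, harmed by the predator
                  [ 1.0,  0.0]])  # predator: benefits from the prey

    response = -np.linalg.inv(A)
    print(np.sign(response))
    # Column 0 (enrich the prey):    prey 0, predator +  -- the enrichment accrues to the predator.
    # Column 1 (boost the predator): prey -, predator +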
Economics is a culturally and politically powerful and contested discipline, and it has been that way as long as it has existed. For some commentators, economics is the "queen of the social sciences", while others view it as a "dismal science" (and both of these epithets allow for diverse interpretations; see Mäki 2002). Economics is also a discipline that deals with a dynamically complex subject matter and has a tradition of reducing this complexity by using systematic procedures of simplification. Nowadays, these procedures involve for the most part building and using mathematical models (for an overview of the philosophical issues, see Morgan and Knuuttila 2011). In the dominant circles of the discipline, one is not regarded as a serious economist with a professional expert view on any given economic or social issue without having a model of it. Much of the power of the discipline and its characteristic contestations therefore involve models and modelling: the successes and failures of the dismal queen are those of modeling. The issues involved in economic modeling have been made particularly acute once again by the financial crisis of 2008-2009 and its aftermath: the discipline of economics is among the candidates for major blame for the failure. I will first outline some thoughts about the characteristic disciplinary conventions that guide and constrain modeling in economics. I will then summarize my account of the very ideas of models and modeling. Finally, within the framework of that account, I will highlight some major issues of contestation and sketch the respective notions of potential success and failure in economic modeling with illustrations. These notions are motivated by my subscription to a (flexible and discipline-sensitive) realist philosophy of science (e.g. Mäki 2005).
Biological forms are very complex, and mechanisms of pattern formation are not well understood. Although developmental biology deals with the mechanistic explanation of patterns, we currently do not know how to derive an understanding of the mechanisms of pattern formation from huge amounts of molecular information. In this article, I present one useful tool, mathematical modeling, to obtain a mechanistic understanding of biological pattern formation, and show an actual example in lung branching morphogenesis. In this example, mathematical modeling plays an indispensable role in understanding the biological phenomena. The model successfully reproduces the basic features of morphogenesis in vitro, but the mechanism in vivo remains to be elucidated.
The strategies of action employed by a human subject in order to perceive simple 2-D forms on the basis of tactile sensory feedback have been modelled by an explicit computer algorithm. The modelling process has been constrained and informed by the capacity of human subjects both to consciously describe their own strategies, and to apply explicit strategies; thus, the strategies effectively employed by the human subject have been influenced by the modelling process itself. On this basis, good qualitative and semi-quantitative agreement has been achieved between the trajectories produced by a human subject, and the traces produced by a computer algorithm. The advantage of this reciprocal modelling option, besides facilitating agreement between the algorithm and the empirically observed trajectories, is that the theoretical model provides an explanation, and not just a description, of the active perception of the human subject.
I argue for the causal character of modeling in data-intensive science, contrary to widespread claims that big data is only concerned with the search for correlations. After discussing the concept of data-intensive science and introducing two examples as illustration, several algorithms are examined. It is shown how they are able to identify causal relevance on the basis of eliminative induction and a related difference-making account of causation. I then situate data-intensive modeling within a broader framework of an epistemology of scientific knowledge. In particular, it is shown to lack a pronounced hierarchical, nested structure. The significance of the transition to such "horizontal" modeling is underlined by the concurrent emergence of novel inductive methodology in statistics such as non-parametric statistics. Data-intensive modeling is well equipped to deal with various aspects of causal complexity arising especially in the higher-level and applied sciences.
The goal of cognitive modeling is to build faithful simulations of human cognition. One of the challenges is that multiple models can often explain the same phenomena. Another challenge is that models are often very hard for others to understand, explore, and reuse. We review some of the solutions that were discussed during the 2015 International Conference on Cognitive Modeling.
Inquiries into the nature of scientific modeling have tended to focus their attention on mathematical models and, relatedly, to think of nonconcrete models as mathematical structures. The arguments of this article are arguments for rethinking both tendencies. Nonmathematical models play an important role in the sciences, and our account of scientific modeling must accommodate that fact. One key to making such accommodations, moreover, is to recognize that one kind of thing we use the term 'model' to refer to is a collection of propositions.
Despite efforts from regulatory agencies (e.g. NIH, FDA), recent systematic reviews of randomised controlled trials (RCTs) show that top medical journals continue to publish trials without requiring authors to report details that would let readers evaluate early stopping decisions carefully. This article presents a systematic way of modelling and simulating interim monitoring decisions of RCTs. By taking an approach that is both general and rigorous, the proposed framework models and evaluates early stopping decisions of RCTs based on a clear and consistent set of criteria. The framework allows decision analysts to generate and quickly answer 'what-if' questions by simulating alternate trial scenarios. I illustrate the framework with a case study of an RCT that was stopped early due to harm: a trial of vitamin A supplementation in relation to mother-to-child HIV transmission through breastfeeding.
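The 'what-if' use of simulation can be sketched as follows (effect sizes, look schedule, and the z-boundary below are hypothetical placeholders, not the criteria of the trial discussed in the paper): simulate many trials under an assumed harmful effect, apply a fixed interim rule at each look, and count how often the rule stops the trial early:

    # Toy interim-monitoring simulation for a two-arm trial with binary outcomes.
    import numpy as np

    def simulate_trial(p_control, p_treatment, n_per_arm, looks, z_boundary, seed):
        """Return ('stopped early', n) or ('completed', n_per_arm) for one simulated trial."""
        rng = np.random.default_rng(seed)
        control = rng.random(n_per_arm) < p_control
        treated = rng.random(n_per_arm) < p_treatment
        for n in looks:
            p1, p2 = control[:n].mean(), treated[:n].mean()
            pooled = (control[:n].sum() + treated[:n].sum()) / (2 * n)
            se = np.sqrt(2 * pooled * (1 - pooled) / n)
            if se > 0 and abs(p2 - p1) / se > z_boundary:
                return "stopped early", n
        return "completed", n_per_arm

    # 'What-if': how often would this assumed harm trigger early stopping
    # under a conservative interim boundary?
    results = [simulate_trial(0.20, 0.30, 1000, looks=[250, 500, 750],
                              z_boundary=3.0, seed=s) for s in range(200)]
    stopped = sum(1 for outcome, _ in results if outcome == "stopped early")
    print(f"{stopped}/200 simulated trials stopped early")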
The distinction between data and phenomena introduced by Bogen and Woodward (Philosophical Review 97(3):303–352, 1988) was meant to help account for scientific practice, especially in relation to scientific theory testing. Their article and the subsequent discussion are primarily viewed as internal to the philosophy of science. We shall argue that the data/phenomena distinction can be used much more broadly in modelling processes in philosophy.
There are no universally adopted answers to the natural questions about scientific concepts: What are they? What is their structure? What are their functions? How many kinds of them are there? Do they change? Ironically, most if not all scientific monographs or articles mention concepts, but scientific studies of scientific concepts are rare. It is well known that a necessary stage of any scientific study is constructing a model of the objects in question. For many years, logical modeling was dominant in concept studies. In recent decades, concepts have come to be regarded as a subject of mathematical modeling. However, different authors take different features of concepts as independent variables of their models. Our objective is to characterize informally the spectra of relevant variables for the modeling of scientific concepts.
Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.
Aristotle saw ethics as a habit that is modeled and developed through practice. Shelley's Victor Frankenstein, though well intentioned in his goals, failed to model ethical behavior for his creation, abandoning it to its own recourse. Today we live in an era of unfettered mergers and acquisitions where once separate and independent media are increasingly concentrated under the control and leadership of the fictitious but legal personhood of a few conglomerated corporations. This paper will explore the impact of mega-media mergers on ethical modeling in journalism. It will diagram the behavioral context underlying the development of ethical habits, discuss leadership theory as it applies to management, and address the question of whether the creation of mega-media conglomerates will result in responsible corporate citizens or monsters who turn on their creators.
A theory of vagueness gives a model of vague language and of reasoning within the language. Among the models that have been offered are Degree Theorists' numerical models that assign values between 0 and 1 to sentences, rather than simply modelling sentences as true or false. In this paper, I ask whether we can benefit from employing a rich, well-understood numerical framework, while ignoring those aspects of it that impute a level of mathematical precision that is not present in the modelled phenomenon of vagueness. Can we ignore apparent implications for the phenomena by pointing out that it is just a model and that the unwanted features are mere artefacts? I explore the distinction between representors and artefacts and criticise the strategy of appealing to features as mere artefacts in defence of a theory. I focus largely on theories using numerical resources, but also consider other, related theories and strategies, including theories appealing to non-linear structures.
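For concreteness, one familiar instance of the numerical framework in question (a standard degree-functional semantics, cited here only as an illustration and not as the paper's preferred theory) assigns each sentence a value in $[0,1]$ and evaluates the connectives by

\[
v(\neg p) = 1 - v(p), \qquad v(p \wedge q) = \min\bigl(v(p), v(q)\bigr), \qquad v(p \vee q) = \max\bigl(v(p), v(q)\bigr).
\]

The question pressed above is which features of such an assignment, for instance that a borderline sentence is true to degree exactly 0.613, genuinely represent the phenomenon of vagueness and which are mere artefacts of the machinery.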
Consider a firm as an organization that needs to efficiently coordinate several specialized departments in an uncertain environment. Decision making involves collective planning sessions and decentralized operational processes. In this setting, this paper explores the role of economic modeling through an experimental game. Results support the idea that economic modeling favors higher performance. Economic modeling facilitates the emergence of common knowledge and the decomposition of a group decision problem into individual decision problems that are meaningfully interrelated.
In order to be capable of exhibiting a wide range of cooperative behavior, a computer-based dialog system must have available assumptions about the current user's goals, plans, background knowledge and (false) beliefs, i.e., maintain a so-called "user model". Apart from cooperativity aspects, such a model is also necessary for intelligent coherent dialog behavior in general. This article surveys recent research on the problem of how such a model can be constructed, represented and used by a system during its interaction with the user. Possible applications, as well as potential problems concerning the advisability of application, are then discussed. Finally, a number of guidelines are presented which should be observed in future research to reduce the risk of a potential misuse of user modeling technology.
This document discusses the status of research on detection and prevention of financial fraud undertaken as part of the IST European Commission-funded FF POIROT (Financial Fraud Prevention Oriented Information Resources Using Ontology Technology) project. A first task has been the specification of the user requirements that define the functionality of the financial fraud ontology to be designed by the FF POIROT partners. It is claimed here that modeling fraudulent activity involves a mixture of law and facts as well as inferences about facts present, facts presumed or facts missing. The purpose of this paper is to explain this abstract model and to specify the set of user requirements.
Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standard that models must meet to achieve these objectives as predictive robustness: predictive reliability over large domains. Drawing on the results of an ethnographic investigation and various studies in the systems biology literature, I explore four current obstacles to achieving predictive robustness: data constraints, parameter uncertainty, collaborative constraints, and system-scale requirements. I use a case study and the commentary of systems biologists themselves to show that current practices in the field, rather than pursuing these goals directly, frequently use models heuristically to investigate and build understanding of biological systems; such uses do not meet the standard of predictive robustness but are nonetheless effective uses of computation. A more heuristic conception of modeling allows us to interpret current practices as ways of managing these obstacles more effectively, particularly collaborative constraints, so that modelers can at least work towards prediction and control in the long run.
In this paper we review some problems with traditional approaches for acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer. We show that as a first step it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only if designers have achieved some understanding of these representational mechanisms can user interfaces enabling individual experience and skill development be designed. In this paper we review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.
In the multidisciplinary field of developmental cognitive neuroscience, statistical associations between levels of description play an increasingly important role. One example of such associations is the observation of correlations between relatively common gene variants and individual differences in behavior. It is perhaps surprising that such associations can be detected despite the remoteness of these levels of description, and the fact that behavior is the outcome of an extended developmental process involving interaction of the whole organism with a variable environment. Given that they have been detected, how do such associations inform cognitive-level theories? To investigate this question, we employed a multiscale computational model of development, using a sample domain drawn from the field of language acquisition. The model comprised an artificial neural network model of past-tense acquisition trained using the backpropagation learning algorithm, extended to incorporate population modeling and genetic algorithms. It included five levels of description: four internal (genetic, network, neurocomputation, behavior) and one external (environment). Since the mechanistic assumptions of the model were known and its operation was relatively transparent, we could evaluate whether cross-level associations gave an accurate picture of causal processes. We established that associations could be detected between artificial genes and behavioral variation, even under polygenic assumptions of a many-to-one relationship between genes and neurocomputational parameters, and when an experience-dependent developmental process interceded between the action of genes and the emergence of behavior. We evaluated these associations with respect to their specificity, to their developmental stability, and to their replicability, as well as considering issues of missing heritability and gene–environment interactions. We argue that gene–behavior associations can inform cognitive theory with respect to effect size, specificity, and timing. The model demonstrates a means by which researchers can undertake multiscale modeling with respect to cognition and develop highly specific and complex hypotheses across multiple levels of description.
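A stripped-down sketch of the same multiscale strategy (a toy learner with made-up parameters, not the published backpropagation model of past-tense acquisition) shows how one can generate a population with artificial genes, let each individual develop through learning in a noisy environment, and then test whether gene variants correlate with the behavioral outcome:

    # Population of individuals whose "genes" set a neurocomputational parameter;
    # behavior emerges only after an experience-dependent developmental process.
    import numpy as np

    rng = np.random.default_rng(0)
    n_individuals, n_genes = 500, 8

    # Binary gene variants; only the first three influence the learning rate
    # (a polygenic, many-to-one mapping from genes to one parameter).
    genes = rng.integers(0, 2, size=(n_individuals, n_genes))
    learning_rate = 0.05 + 0.03 * genes[:, :3].sum(axis=1)

    def develop(lr, trials=20):
        """Error-driven learning of a target value in a noisy environment; higher is better."""
        w, target = 0.0, 1.0
        for _ in range(trials):
            w += lr * (target - w) + rng.normal(scale=0.05)
        return -abs(target - w)   # behavioral outcome after development

    behavior = np.array([develop(lr) for lr in learning_rate])

    # Gene-behavior association despite the intervening developmental process.
    for g in range(n_genes):
        r = np.corrcoef(genes[:, g], behavior)[0, 1]
        print(f"gene {g}: r = {r:+.2f}")
    # The causally relevant genes (0-2) should show positive correlations;
    # the remaining genes should hover near zero.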
Appropriate enablers are essential for management of intellectual capital. Through the use of structural equation modeling, we investigate whether organic renewal environments, interactive behaviors, and trust are conducive to intellectual capital management processes, as they each depend upon the establishment of a climate emphasizing mutual respect. Owing to a lack of clarity in the literature, we tested the ordering of the variables and found statistical significance for two ordering alternatives. However, the sequence presented in this article provides the best statistical fit: an organic renewal environment provides a foundation for interactive behaviors, which leads to trust, and thus is consistent with the development of intellectual capital management processes within the organization.
Information modeling (also known as conceptual modeling or semantic data modeling) may be characterized as the formulation of a model in which information aspects of objective and subjective reality are presented (the application), independent of datasets and processes by which they may be realized (the system). A methodology for information modeling should incorporate a number of concepts which have appeared in the literature, but should also be formulated in terms of constructs which are understandable to and expressible by the system user as well as the system developer. This is particularly desirable in connection with certain intimate relationships, such as being the same as or being a part of.
The distinction between the modeling of information and the modeling of data in the creation of automated systems has historically been important because the development tools available to programmers have been wedded to machine-oriented data types and processes. However, advances in software engineering, particularly the move toward data abstraction in software design, allow activities reasonably described as information modeling to be performed in the software creation process. An examination of the evolution of programming languages and development of general programming paradigms, including object-oriented design and implementation, suggests that while data modeling will necessarily continue to be a programmer's concern, more and more of the programming process itself is coming to be characterized by information modeling activities.
This paper aims at integrating the work on analogical reasoning in cognitive science into the long trend of philosophical interest, in this century, in analogical reasoning as a basis for scientific modeling. In the first part of the paper, three simulations of analogical reasoning proposed in cognitive science are presented: Gentner's Structure-Mapping Engine, Mitchell's and Hofstadter's Copycat, and the Analogical Constraint Mapping Engine proposed by Holyoak and Thagard. The differences and controversial points in these simulations are highlighted in order to make explicit their presuppositions concerning the nature of analogical reasoning. In the last part, this debate in cognitive science is applied to some traditional philosophical accounts of formal and material analogies as a basis for scientific modeling, such as Mary Hesse's, and to more recent ones that already draw from the work in Artificial Intelligence, such as that proposed by Aronson, Harré and Way.
Modeling and simulation clearly have an upside. My discussion here will deal with the inevitable downside of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry. In the course of that discussion, I also call on some of my past experience with models and their vulnerabilities.
Fundamental assumptions behind qualitative modelling are critically considered and some inherent problems in that modelling approach are outlined. The problems outlined are due to the assumption that a sufficient set of symbols representing the fundamental features of the physical world exists. That assumption causes serious problems when modelling continuous systems. An alternative for intelligent system building for cases not suitable for qualitative modelling is proposed. The proposed alternative combines neural networks and quantitative modelling.