The recent discussion of scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. Here, however, agreement among philosophers of science has tended to end, as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling, given its focus on the relationship between a single model and its supposed target system and its neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.
The distinction between data and phenomena introduced by Bogen and Woodward (Philosophical Review 97(3):303–352, 1988) was meant to help account for scientific practice, especially in relation to scientific theory testing. Their article and the subsequent discussion are primarily viewed as internal to the philosophy of science. We shall argue that the data/phenomena distinction can be used much more broadly in modelling processes in philosophy.
It is argued that complexity is not attributable directly to systems or processes but rather to the descriptions of their 'best' models, reflecting their difficulty. Complexity is thus relative to the modelling language and the type of difficulty. This approach to complexity is situated in a model of modelling, and it makes sense of a number of aspects of scientific modelling: complexity is not situated between order and disorder; noise can be explicated by approaches to excess modelling error; and simplicity is not truth-indicative but a useful heuristic when models are produced by a being with a tendency to elaborate in the face of error.
Scientists confronted with multiple explanatory hypotheses as a result of their abductive inferences generally want to reason further on the different hypotheses one by one. This paper presents a modal adaptive logic MLAs that enables us to model abduction in such a way that the different explanatory hypotheses can be derived individually. This modelling is illustrated with a case study on the different hypotheses on the origin of the Moon.
Fundamental assumptions behind qualitative modelling are critically considered and some inherent problems in that modelling approach are outlined. The problems outlined are due to the assumption that a sufficient set of symbols representing the fundamental features of the physical world exists. That assumption causes serious problems when modelling continuous systems. An alternative for intelligent system building for cases not suitable for qualitative modelling is proposed. The proposed alternative combines neural networks and quantitative modelling.
The strategies of action employed by a human subject in order to perceive simple 2-D forms on the basis of tactile sensory feedback have been modelled by an explicit computer algorithm. The modelling process has been constrained and informed by the capacity of human subjects both to consciously describe their own strategies, and to apply explicit strategies; thus, the strategies effectively employed by the human subject have been influenced by the modelling process itself. On this basis, good qualitative and semi-quantitative agreement has been achieved between the trajectories produced by a human subject, and the traces produced by a computer algorithm. The advantage of this reciprocal modelling option, besides facilitating agreement between the algorithm and the empirically observed trajectories, is that the theoretical model provides an explanation, and not just a description, of the active perception of the human subject.
Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (Theoria 26(1):5–33, 2011) put forward the Recursive Bayesian Networks (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.
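The acyclicity requirement at issue here has a simple operational form: a directed graph supports a Bayesian-network factorisation only if it admits a topological order. The sketch below is illustrative only (it is not the RBN formalism); it shows how a feedback cycle defeats the ordering that such a factorisation needs.

```python
from collections import deque

def topological_order(nodes, edges):
    """Return a topological order of the directed graph, or None if it
    contains a cycle (and so cannot serve as a Bayesian-net structure)."""
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for u, v in edges:
            if u == n:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    # If some nodes were never freed, they sit on a cycle.
    return order if len(order) == len(nodes) else None

# An acyclic causal chain orders fine; a feedback loop does not.
acyclic = topological_order("ABC", [("A", "B"), ("B", "C")])
cyclic = topological_order("ABC", [("A", "B"), ("B", "C"), ("C", "A")])
```

This is why causal cycles are a prima facie problem for any DAG-based formalism: the factorisation P(A, B, C) = P(A) P(B|A) P(C|B) presupposes exactly the ordering that the cyclic case lacks.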
Like other sciences, biosemiotics also has its time-honoured archive, consisting of writings by those who have been invented and revered as ancestors of the discipline. One such example is Jakob von Uexküll. As to the people who ‘invented’ him, they are either, to paraphrase a French cliché, ‘agents du cosmopolitisme sémiotique’ like Thomas Sebeok, or de jure and de facto progenitors like Thure von Uexküll. In the archive is the special issue of Semiotica 42.1 (1982), edited by the late Sebeok and introduced by Thure von Uexküll. It is in the opening essay that Thure von Uexküll tries to restore Jakob von Uexküll’s role as a precursor of semiotics by negotiating the Elder with Saussure and the linguistics-oriented ‘semiology’ in his wake. However, semiotic mapping, in the strictly ‘disciplinary’ sense, of Jakob von Uexküll is no easy task because he ‘knew neither Peirce nor Saussure and did not use their terminology’ (Thure von Uexküll 1982, 2). Because Thure prefers to call the Elder’s science ‘general semiotics’ (Thure von Uexküll 1982), this paper begins by assessing Thure von Uexküll’s semiotic configuration of Jakob, then probes into the force and limits of the linguistic analogy and revisits the already time-honoured debate on the primary and secondary modelling systems, which was made famous by the Moscow–Tartu semioticians in the early 1970s but severely criticized by Sebeok and his followers. The paper engages Sebeok on several fronts, directed first at his relegation of the Saussurean linguistic model, then at his critique of the Primary Modelling System, and finally at his reservation about evolutionism in light of the current debate on gene/meme co-evolution.
The issue of the role of users in knowledge-based systems can be investigated from two aspects: the design aspect and the functionality aspect. Participatory design is an important approach for the first, while system adaptability supported by user modelling is crucial to the second. In this article, we discuss the second aspect. We view a knowledge-based computer system as a partner in users' problem-solving processes, and we argue that system functionality can be enhanced by adapting the behaviour of the system to fit the needs of users with different profiles. We emphasise that the notion of user modelling is crucial to realising this kind of flexibility. User modelling will benefit the user not only through adaptive interfaces but also through enhanced system adaptability. In a knowledge-based system, by incorporating user models, searching can be reduced to a smaller portion of the knowledge base, thus enhancing system functionality. In other words, user modelling is incorporated to realise flexible inference control and so achieve system adaptability. An example is provided, and a general conceptual model is sketched. We conclude this paper by emphasising that the design and functionality aspects are complementary. Achieving enhanced functionality through the joint efforts of computers and human users indicates a kind of participatory execution of computerised problem-solving, or participatory problem-solving.
Experimental data on social preferences present a number of features that need to be incorporated in econometric modelling. We explore a variety of econometric modelling approaches to the analysis of such data. The approaches under consideration are: the Random Utility approach (in which it is assumed that each possible action yields a utility with a deterministic and a stochastic component, and that the individual selects the action yielding the highest utility); the Random Behavioural approach (which assumes that the individual computes the maximum of a deterministic utility function, and that computational error causes their observed behaviour to depart stochastically from this optimum); and the Random Preference approach (in which all variation in behaviour is attributed to stochastic variation in the parameters of the deterministic component of utility). These approaches are applied in various ways to an experiment on fairness conducted by Cappelen et al. (Am Econ Rev 97(3):818–827, 2007). Various models that we estimate succeed in capturing the key features of the dataset. Conclusions concerning fairness-related behaviour depend crucially on the choice of econometric model.
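The first of the three approaches can be sketched in a few lines: under random utility, each action's payoff is a deterministic component plus an independent stochastic term, and the agent chooses the action with the highest realised utility. The action names and numbers below are purely illustrative, not drawn from Cappelen et al.'s data.

```python
import random

def random_utility_choice(deterministic_utilities, noise_scale=1.0, rng=random):
    """Random Utility: U_i = v_i + eps_i; the agent chooses argmax_i U_i.
    Gaussian eps gives a probit-style model; Gumbel eps would give logit."""
    utilities = {
        action: v + rng.gauss(0.0, noise_scale)  # stochastic component
        for action, v in deterministic_utilities.items()
    }
    return max(utilities, key=utilities.get)

# With negligible noise the deterministic optimum is always chosen;
# larger noise lets observed behaviour depart from it stochastically.
rng = random.Random(0)
choice = random_utility_choice({"keep": 2.0, "share": 1.0}, noise_scale=1e-9, rng=rng)
```

The other two approaches differ in where the stochastic term sits: in the Random Behavioural approach the noise perturbs the computed optimum itself, and in the Random Preference approach it perturbs the parameters inside the deterministic utility function.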
This document discusses the status of research on detection and prevention of financial fraud undertaken as part of the IST European Commission funded FF POIROT (Financial Fraud Prevention Oriented Information Resources Using Ontology Technology) project. A first task has been the specification of the user requirements that define the functionality of the financial fraud ontology to be designed by the FF POIROT partners. It is claimed here that modeling fraudulent activity involves a mixture of law and facts as well as inferences about facts present, facts presumed or facts missing. The purpose of this paper is to explain this abstract model and to specify the set of user requirements.
In this paper we describe in some detail a formal computer model of inferential discourse based on a belief system. The key issue is that a logical model in a computer, based on rational sets, can usefully model a human situation based on irrational sets. The background of this work is explained elsewhere, as is the issue of rational and irrational sets (Billinge and Addis, in: Magnani and Dossena (eds.), Computing, philosophy and cognition, 2004; Stepney et al., Journey: Non-classical philosophy—socially sensitive computing in journeys non-classical computation: A grand challenge for computing research, 2004). The model is based on the Belief System (Addis and Gooding, Proceedings of the AISB’99 Symposium on Scientific Creativity, 1999) and it provides a mechanism for choosing queries based on a range of belief. We explain how it provides a way to update the belief based on query results, thus modelling others’ experience by inference. We also demonstrate that for the same internal experience, different models can be built for different actors.
In this paper, we introduce the methodology and techniques of meta-argumentation to model argumentation. The methodology of meta-argumentation instantiates Dung’s abstract argumentation theory with an extended argumentation theory, and is thus based on a combination of the methodology of instantiating abstract arguments, and the methodology of extending Dung’s basic argumentation frameworks with other relations among abstract arguments. The technique of meta-argumentation applies Dung’s theory of abstract argumentation to itself, by instantiating Dung’s abstract arguments with meta-arguments using a technique called flattening. We characterize the domain of instantiation using a representation technique based on soundness and completeness. Finally, we distinguish among various instantiations using the technique of specification languages.
Normative theories suggest that inconsistencies be pointed out to the Decision Maker, who is thus given the chance to modify his/her judgments. In this paper, we suggest that the inconsistencies problem be transferred from the Decision Maker to the Analyst. With the Mixture of Maximal Quasi Orders, rather than pointing out incoherences for the Decision Maker to change, these inconsistencies may be used as a new source of information to model his/her preferences.
The descriptions and theoretical laws scientists write down when they model a system are often false of any real system. And yet we commonly talk as if there were objects that satisfy the scientists’ assumptions and as if we may learn about their properties. Many attempt to make sense of this by taking the scientists’ descriptions and theoretical laws to define abstract or fictional entities. In this paper, I propose an alternative account of theoretical modelling that draws upon Kendall Walton’s ‘make-believe’ theory of representation in art. I argue that this account allows us to understand theoretical modelling without positing any object of which scientists’ modelling assumptions are true.
The nature of complex concepts has important implications for the computational modelling of the mind, as well as for the cognitive science of concepts. This paper outlines the way in which RVC (a Relational View of Concepts) accommodates a range of complex concepts, cases which have been argued to be non-compositional. RVC attempts to integrate a number of psychological, linguistic and psycholinguistic considerations with the situation-theoretic view that information-carrying relations hold only relative to background situations. The central tenet of RVC is that the content of concepts varies systematically with perspective. The analysis of complex concepts indicates that compositionality, too, should be considered sensitive to perspective. Such a view accords with concepts and mental states being situated, and the implications for theories of concepts and for computational models of the mind are discussed.
This paper examines creative strategies employed in scientific modelling. It is argued that being creative presents not a discrete event, but rather an ongoing effort consisting of many individual 'creative acts'. These take place over extended periods of time and can be carried out by different people, working on different aspects of the same project. The example of extended extragalactic radio sources shows that, in order to model a complicated phenomenon in its entirety, the modelling task is split up into smaller problems that result in several sub-models. This is a way of using cognitive resources efficiently and in a way which overcomes their limitations. Another aspect of modelling that requires creativity is the employment of visualisation in order to reassemble, i.e. recreate the unity of, the various sub-models. This illustrates how the creative effort required to deal with the complexity of the complicated phenomenon of radio sources is channelled in order to use cognitive resources efficiently and to stay within their capacity.
Designing models of complex phenomena is a difficult task in engineering that can be tackled by composing a number of partial models to produce a global model of the phenomena. We propose to embed the partial models in software agents and to implement their composition as a cooperative negotiation between the agents. The resulting multiagent system provides a global model of a phenomenon. We applied this approach in modelling two complex physiological processes: heart rate regulation and the glucose-insulin metabolism. Beyond the effectiveness demonstrated in these two applications, the idea of using models associated with software agents to account for complex phenomena is in accordance with current tendencies in epistemology, where an increasing use of computational models for scientific explanation and analysis is evident. Therefore, our approach has not only practical but also theoretical significance: agents embedding models are a technology suitable both for representing and for investigating reality.
Computer science advocates institutional frameworks as an effective tool for modelling policies and reasoning about their interplay. In practice, the rules or policies of which the institutional framework consists are often specified using a formal language, which allows for the full verification and validation of the framework (e.g. the consistency of policies) and of the interplay between the policies and actors (e.g. violations). However, when modelling large-scale realistic systems with numerous decision-making entities, scalability and complexity issues arise, making it possible only to verify certain portions of the problem without reducing the scale. In the social sciences, agent-based modelling is a popular tool for analysing how entities interact within a system and react to the system's properties. Agent-based modelling allows the specification of complex decision-making entities and experimentation with large numbers of different parameter sets for these entities in order to explore their effects on overall system performance. In this paper we describe how to achieve the best of both worlds, namely verification of a formal specification combined with the testing of large-scale systems with numerous different actor configurations. Hence, we offer an approach that allows for reasoning about policies, policy making and their consequences on a more comprehensive level than has been possible to date. We present the institutional agent-based model methodology, which combines institutional frameworks with agent-based simulations. We furthermore present J-InstAL, a prototypical implementation of this methodology using the InstAL institutional framework, whose specifications can be translated into a computational model under the answer set semantics, and an agent-based simulation based on the Jason tool. Using a simplified contract enforcement example, we demonstrate the functionalities of this prototype and show how it can help to assess an appropriate fine level in case of contract violations.
In Making Sense of Life, Keller emphasizes several differences between biology and physics. Her analysis focuses on significant ways in which modelling practices in some areas of biology, especially developmental biology, differ from those of the physical sciences. She suggests that natural models and modelling by homology play a central role in the former but not the latter. In this paper, I focus instead on those practices that are importantly similar, from the point of view of epistemology and cognitive science. I argue that concrete and abstract models are significant in both disciplines, that there are shared selection criteria for models in physics and biology, e.g. familiarity, and that modelling often occurs in a similar fashion.
Experimental activity is traditionally identified with testing the empirical implications or numerical simulations of models against data. In critical reaction to the ‘tribunal view’ on experiments, this essay will show the constructive contribution of experimental activity to the processes of modeling and simulating. Based on the analysis of a case in fluid mechanics, it will focus specifically on two aspects. The first is the controversial specification of the conditions in which the data are to be obtained. The second is conceptual clarification, with a redefinition of concepts central to the understanding of the phenomenon and the conditions of its occurrence.
Many biological investigations are organized around a small group of species, often referred to as “model organisms”, such as the fruit fly Drosophila melanogaster. The terms “model” and “modeling” also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka-Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modeling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account.
Levins and Lewontin have contributed significantly to our philosophical understanding of the structures, processes, and purposes of biological mathematical theorizing and modeling. Here I explore their separate and joint pleas to avoid making abstract and ideal scientific models ontologically independent by confusing or conflating our scientific models and the world. I differentiate two views of theorizing and modeling, orthodox and dialectical, in order to examine the advocacy of the latter view by Levins and Lewontin, among others. I compare the positions of these two views with respect to four points regarding ontological assumptions: (1) the origin of ontological assumptions, (2) the relation of such assumptions to the formal models of the same theory, (3) their use in integrating and negotiating different formal models of distinct theories, and (4) their employment in explanatory activity. ‘Dialectical’ is here used both in its Hegelian–Marxist sense of opposition and tension between alternative positions and in its Platonic sense of dialogue between advocates of distinct theories. I investigate three case studies, from Levins and Lewontin as well as from a recent paper of mine, that show the relevance and power of the dialectical understanding of theorizing and modeling.
This article briefly reviews the fundamentals of structural equation modeling for readers unfamiliar with the technique, then goes on to offer a review of the Martin and Cullen paper. In summary, a number of fit indices reported by the authors reveal that the data do not fit their theoretical model, and thus the authors’ conclusion that the model was “promising” is unwarranted.
The emphasis on models hasn’t completely eliminated laws from scientific discourse and philosophical discussion. Instead, I want to argue that much of physics lies beyond the strict domain of laws. I shall argue that in important cases the physics, or physical understanding, does not lie either in laws or in their properties, such as universality, consistency and symmetry. I shall argue that the domain of application commonly attributed to laws is too narrow; that is, laws can still play an important, though peculiar, role outside their strict domain of validity. I shall argue also that, by way of a trade-off, the actual domain of application of laws should be seen as much broader, while at the same time what I call ‘anomic’ representational elements reveal themselves as central to the descriptive and explanatory power of theories and models: boundary conditions, state descriptions, structures, constraints, limits and mechanisms. I conclude with a brief consideration of how my discussion has consequences for discussions of understanding, unification, approximation and dispositional properties. I focus on examples from physics, macroscopic and microscopic, phenomenological and fundamental: shock waves, propagation of cracks, symmetry breaking, and others. This law-eccentric kind of knowledge is central to both modeling the world and intervening in it.
The article first addresses the importance of cognitive modeling, in terms of its value to cognitive science (as well as other social and behavioral sciences). In particular, it emphasizes the use of cognitive architectures in this undertaking. Based on this approach, the article addresses, in detail, the idea of a multi-level approach that ranges from social to neural levels. In physical sciences, a rigorous set of theories is a hierarchy of descriptions/explanations, in which causal relationships among entities at a high level can be reduced to causal relationships among simpler entities at a more detailed level. We argue that a similar hierarchy makes possible an equally productive approach toward cognitive modeling. The levels of models that we conceive in relation to cognition include, at the highest level, sociological/anthropological models of collective human behavior, behavioral models of individual performance, cognitive models involving detailed mechanisms, representations, and processes, as well as biological/physiological models of neural circuits, brain regions, and other detailed biological processes.
To explore the relation between mathematical models and reality, four different domains of reality are distinguished: observer-independent reality (to which there is no direct access), personal reality, social reality and mathematical/formal reality. The concepts of personal and social reality are strongly inspired by constructivist ideas. Mathematical reality is social as well, but constructed as an autonomous system in order to make absolute agreement possible. The essential problem of mathematical modelling is that within mathematics there is agreement about ‘truth’, but the assignment of mathematics to informal reality is not itself formally analysable; it is dependent on social and personal construction processes, and on these levels absolute agreement cannot be expected. Starting from this point of view, the repercussions of mathematical reality on social and personal reality, the historical development of mathematical modelling, and the role, use and interpretation of mathematical models in scientific practice are discussed.
If chemistry is to be taught successfully, teachers must have a good subject matter knowledge (SK) of the ideas with which they are dealing, the nature of this falling within the orbit of philosophy of chemistry. They must also have a good pedagogic content knowledge (PCK), the ability to communicate SK to students, the nature of this falling within the philosophy and psychology of chemical education. Taking the case of models and modelling, important themes in the philosophy of chemistry, an interview-based study was conducted into the SK and PCK of a sample of teachers in Brazil. This paper focuses on the results of the university chemistry teacher sub-sample in that enquiry, analyses their SK and PCK, and speculates on the implications of this for the education of school teachers. Finally, it suggests approaches to the professional development of university chemistry teachers that place an emphasis on the philosophy of chemistry.
This paper aims at integrating the work on analogical reasoning in Cognitive Science into the long trend of philosophical interest, in this century, in analogical reasoning as a basis for scientific modeling. In the first part of the paper, three simulations of analogical reasoning proposed in cognitive science are presented: Gentner's Structure Matching Engine, Mitchell's and Hofstadter's COPYCAT, and the Analogical Constraint Mapping Engine proposed by Holyoak and Thagard. The differences and controversial points in these simulations are highlighted in order to make explicit their presuppositions concerning the nature of analogical reasoning. In the last part, this debate in cognitive science is applied to some traditional philosophical accounts of formal and material analogies as a basis for scientific modeling, like Mary Hesse's, and to more recent ones that already draw on the work in Artificial Intelligence, like that proposed by Aronson, Harré and Way.
Aristotle saw ethics as a habit that is modeled and developed through practice. Shelley's Victor Frankenstein, though well intentioned in his goals, failed to model ethical behavior for his creation, abandoning it to its own recourse. Today we live in an era of unfettered mergers and acquisitions in which once separate and independent media are increasingly concentrated under the control and leadership of the fictitious but legal personhood of a few conglomerated corporations. This paper will explore the impact of mega-media mergers on ethical modeling in journalism. It will diagram the behavioral context underlying the development of ethical habits, discuss leadership theory as it applies to management, and address the question of whether the creation of mega-media conglomerates will result in responsible corporate citizens or monsters who turn on their creators.
Accounts of the relation between theories and models in biology concentrate on mathematical models. In this paper I consider the dual role of models as representations of natural systems and as a material basis for theorizing. In order to explicate the dual role, I develop the concept of a remnant model, a material entity made from parts of the natural system(s) under study. I present a case study of an important but neglected naturalist, Joseph Grinnell, to illustrate the extent to which mundane practices in a museum setting constitute theorizing. I speculate that historical and sociological analyses of institutions can play a specific role in the philosophical analysis of model-building strategies.
We analyze different aspects of our quantum modeling approach to human concepts and, more specifically, focus on the quantum effects of contextuality, interference, entanglement, and emergence, illustrating how each of them makes its appearance in specific situations of the dynamics of human concepts and their combinations. We point out the relation of our approach, which is based on an ontology of a concept as an entity in a state changing under the influence of a context, to the main traditional concept theories, that is, prototype theory, exemplar theory, and theory theory. We ponder the question of why quantum theory performs so well in its modeling of human concepts, and we shed light on this question by analyzing the role of complex amplitudes, showing how they allow us to describe interference in the statistics of measurement outcomes, whereas in the traditional theories the statistics of outcomes originate in classical probability weights, without the possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all as unique features of the quantum modeling, are explicitly revealed in this article by analyzing human concepts and their dynamics.
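The contrast the authors draw between complex amplitudes and classical probability weights can be shown numerically. A classical mixture of two states gives a convex combination of their outcome probabilities; an equal superposition of amplitudes adds an interference term that depends on the relative phase. The numbers below are illustrative only, not taken from the paper.

```python
import cmath
import math

def classical_mix(p_a, p_b):
    """Classical probability weights: a 50/50 mixture of two states.
    The result is always between p_a and p_b; no interference possible."""
    return 0.5 * (p_a + p_b)

def quantum_mix(psi_a, psi_b):
    """Equal superposition of two complex amplitudes:
    probability = |(psi_a + psi_b) / sqrt(2)|^2, which includes a
    cross term 2*Re(psi_a * conj(psi_b)) absent classically."""
    return abs((psi_a + psi_b) / math.sqrt(2)) ** 2

# Both component states give the outcome with probability 0.5...
p_classical = classical_mix(0.5, 0.5)
# ...but the superposed probability swings with the relative phase:
p_constructive = quantum_mix(cmath.sqrt(0.5), cmath.sqrt(0.5))
p_destructive = quantum_mix(cmath.sqrt(0.5), -cmath.sqrt(0.5))
```

The classical mixture stays at 0.5 no matter what, while the superposition ranges from 0 to 1 as the relative phase varies, which is exactly the extra descriptive freedom that complex amplitudes provide.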
Information modeling (also known as conceptual modeling or semantic data modeling) may be characterized as the formulation of a model in which information aspects of objective and subjective reality are presented (the application), independent of the datasets and processes by which they may be realized (the system). A methodology for information modeling should incorporate a number of concepts which have appeared in the literature, but should also be formulated in terms of constructs which are understandable to and expressible by the system user as well as the system developer. This is particularly desirable in connection with certain intimate relationships, such as being the same as or being a part of.
Syntactic and structural models specify relationships between their constituents but cannot show what outcomes their interaction would produce over time in the world. Simulation consists in iterating the states of a model, so as to produce behaviour over a period of simulated time. Iteration enables us to trace the implications and outcomes of inference rules and other assumptions implemented in the models that make up a theory. We apply this method to experiments which we treat as models of the particular aspects of reality they are designed to investigate. Scientific experiments are constantly designed and re-designed in the context of implementation and use. They mediate between theoretical understanding and the practicalities of engaging with the empirical and social world. In order to model experiments we need to identify and represent features that all experiments have in common. We treat these features as parameters of a general model of experiment so that by varying these parameters different types of experiment can be modelled.
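The iteration described here, repeatedly applying a transition rule to a model state to produce behaviour over simulated time, can be written generically. The transition rule below is a placeholder example, not a model from the paper.

```python
def simulate(initial_state, transition, steps):
    """Iterate a model: apply the transition rule repeatedly,
    recording the trajectory of states over simulated time."""
    trajectory = [initial_state]
    for _ in range(steps):
        trajectory.append(transition(trajectory[-1]))
    return trajectory

# Illustrative rule: a toy population growing 10% per time step.
growth = lambda x: round(x * 1.1, 6)
trajectory = simulate(100.0, growth, 3)
```

The point of the abstract is visible even in this toy: the rule itself only states a relationship between successive states, while the trajectory, the behaviour over time, appears only once the rule is iterated.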
This article applies the concept of prudence to develop the characteristics of responsible risk-modeling practices in the insurance industry. A critical evaluation of the risk-modeling process suggests that ethical judgments are emergent rather than static, vague rather than clear, particular rather than universal, and still defensible according to the discipline’s established theory, which will support a range of judgments. Thus, positive moral guides for responsible behavior are of limited practical value. Instead, by being prudent, modelers can improve their ability to deal with the ethical and technical complexity of the risk-modeling process. While the application of prudence to resolve ethical challenges in risk modeling, an issue of practical importance to managers, is a first in the literature, the practice of applying an ethical lens to issues of pragmatic importance for managers is well established in Maak and Pless (J Bus Ethics 66:99–115, 2006a; Responsible Leadership, 2006b), among others.
This paper describes the processes of cognitive modeling and representation of human expertise for developing an ontology and knowledge base of an expert system. An ontology is an organization and classification of knowledge. Ontological engineering in artificial intelligence (AI) has the practical goal of constructing frameworks for knowledge that allow computational systems to tackle knowledge-intensive problems and support knowledge sharing and reuse. Ontological engineering is also a process that facilitates construction of the knowledge base of an intelligent system, which can be defined as a computer program that duplicates the problem-solving capabilities of human experts in specific areas. This paper presents the processes of knowledge acquisition, analysis, and representation, which laid the basis for ontology construction. In this case, the processes are applied in ontological engineering for construction of an expert system in the domain of monitoring a petroleum production and separation facility. The acquired knowledge was also formally represented in two knowledge acquisition tools.
The distinction between the modeling of information and the modeling of data in the creation of automated systems has historically been important because the development tools available to programmers have been wedded to machine-oriented data types and processes. However, advances in software engineering, particularly the move toward data abstraction in software design, allow activities reasonably described as information modeling to be performed in the software creation process. An examination of the evolution of programming languages and the development of general programming paradigms, including object-oriented design and implementation, suggests that while data modeling will necessarily continue to be a programmer's concern, more and more of the programming process itself is coming to be characterized by information modeling activities.
Modeling and simulation clearly have an upside. My discussion here will deal with the inevitable downside of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry. In the course of that discussion, I also call on some of my past experience with models and their vulnerabilities.
Richard Levins has advocated the scientific merits of qualitative modeling throughout his career. He believed an excessive and uncritical focus on emulating the models used by physicists and maximizing quantitative precision was hindering biological theorizing in particular. Greater emphasis on qualitative properties of modeled systems would help counteract this tendency, and Levins subsequently developed one method of qualitative modeling, loop analysis, to study a wide variety of biological phenomena. Qualitative modeling has been criticized as conceptually and methodologically problematic. As a clear example of a qualitative modeling method, loop analysis shows this criticism is indefensible. The method has, however, some serious limitations. This paper describes loop analysis and its limitations, and attempts to clarify the differences between quantitative and qualitative modeling in content and objective. Loop analysis is but one of numerous types of qualitative analysis, so its limitations do not detract from the currently underappreciated and underdeveloped role qualitative modeling could have within science.
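The flavor of loop analysis can be sketched briefly. The two-species community below is my own illustration, not an example from the paper: only the signs of the interactions are specified, yet the signs of the inverse of the community matrix already yield qualitative predictions about responses to a sustained ("press") perturbation.

```python
import numpy as np

# Community matrix with only sign information (magnitudes set to unit size):
# prey is self-limited and eaten by the predator; the predator benefits from prey.
A = np.array([[-1.0, -1.0],   # row 0: effects on prey
              [ 1.0,  0.0]])  # row 1: effects on predator

# Response of each variable (rows) to a press perturbation of each variable
# (columns) is proportional to -inv(A); only its signs are interpreted.
response = -np.linalg.inv(A)
signs = np.sign(np.round(response, 9)).astype(int)
print(signs.tolist())  # [[0, -1], [1, 1]]
```

The sign pattern says, for instance, that enriching the prey's input leaves prey abundance unchanged and raises only the predator, a classic qualitative prediction that needs no parameter estimates at all.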
This paper describes the author’s development and use of a diagramming model in preparing a legal case for which he was responsible. He combined Wigmorean analysis and object-oriented techniques in order to model arguments based on generalisations taken from the real world and from legal precedent. The paper addresses the modelling issues, but in particular identifies the very real benefits that affected the way the case was conducted. The areas in which the model came into its own were principally the structuring of evidence, the preparation for the cross-examination of witnesses, and ensuring a consistent approach from picking up the case to making the closing submissions.
Despite efforts from regulatory agencies (e.g. NIH, FDA), recent systematic reviews of randomised controlled trials (RCTs) show that top medical journals continue to publish trials without requiring authors to report details that would let readers evaluate early stopping decisions carefully. This article presents a systematic way of modelling and simulating interim monitoring decisions of RCTs. By taking an approach that is both general and rigorous, the proposed framework models and evaluates early stopping decisions of RCTs based on a clear and consistent set of criteria. The framework allows decision analysts to generate and quickly answer ‘what-if’ questions by simulating alternative trial scenarios. I illustrate the framework with a case study of an RCT that was stopped early due to harm: a trial of vitamin A supplementation in relation to mother-to-child HIV transmission through breastfeeding.
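The kind of 'what-if' simulation the abstract describes can be sketched as follows. This is my own minimal illustration, not the paper's framework: a two-arm trial with three interim looks, a simple fixed harm-stopping boundary on a two-proportion z statistic, and a Monte Carlo estimate of how often the trial would stop early under an assumed true effect.

```python
import math
import random

def simulate_trial(p_control, p_treatment, looks=(100, 200, 300), z_harm=2.5, rng=random):
    """Simulate one trial; 'events' are bad outcomes, so more events on the
    treatment arm counts as harm. Assumed boundary: stop if z > z_harm."""
    events_c = events_t = n = 0
    for look in looks:
        while n < look:               # accrue patients up to this interim look
            events_c += rng.random() < p_control
            events_t += rng.random() < p_treatment
            n += 1
        p1, p2 = events_t / n, events_c / n
        p_pool = (events_t + events_c) / (2 * n)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
        if se > 0 and (p1 - p2) / se > z_harm:
            return ("stopped_for_harm", look)
    return ("completed", looks[-1])

# What-if scenario: an assumed true harmful effect (20% vs 35% event rate).
random.seed(1)
runs = [simulate_trial(0.20, 0.35) for _ in range(2000)]
stopped = sum(r[0] == "stopped_for_harm" for r in runs)
print(f"stopped early for harm in {stopped / len(runs):.0%} of simulations")
```

Rerunning the same loop with different assumed event rates, boundaries, or look schedules is exactly the sort of alternative-scenario question the framework is meant to answer quickly.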
This paper tries to express a critical point of view on the computational turn in philosophy by looking at a specific field of study: philosophy of science. The paper starts by briefly discussing the main contributions that information and communication technologies have made to the rise of computational philosophy of science, and in particular to the cognitive modelling approach. The main question then arises of how computational models can cope with the presence of tacit knowledge in science. Would it be possible to develop new ways of handling this specific type of knowledge, in order to incorporate it in computational models of scientific thinking? Or should tacit knowledge lead us to other approaches in using computer science to model scientific cognition? These questions are addressed by making reference to a detailed case study of a recent innovation in the field of biotechnology.
Climate change presents us with a problem of intergenerational justice. While any costs associated with climate change mitigation measures will have to be borne by the world’s present generation, the main beneficiaries of mitigation measures will be future generations. This raises the question of the extent to which present generations have a responsibility to shoulder these costs. One influential approach to addressing this question is to appeal to neo-classical economic cost–benefit analyses and so-called economy-climate “integrated assessment models” to determine what course of action a principle of intergenerational welfare maximization would require of us. I critically examine a range of problems for this approach. First, integrated assessment models face a problem of underdetermination and induction: they are very sensitive to a number of highly conjectural assumptions about economic responses to a changed temperature and climate regime, for which we have no empirical evidence. Second, they involve several simplifying assumptions which cannot be justified empirically. And third, some of the assumptions underlying the construction of economic models are intrinsically normative assumptions that reflect the value judgments of the modeler. I conclude that, while integrated assessment models may play a useful role as “toy models,” their use as tools for policy optimization is highly problematic.
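One of the conjectural assumptions the abstract points to, the discount rate, can be made vivid with simple arithmetic. The damage figure below is an assumption of mine, not a number from the paper; the point is only how violently the present value of century-distant damages swings with the assumed rate.

```python
# Assumed: $1 trillion of climate damages incurred 100 years from now.
damage = 1_000_000_000_000

# Standard exponential discounting: PV = damage / (1 + r)^t.
present_values = {}
for rate in (0.01, 0.03, 0.05):
    pv = damage / (1 + rate) ** 100
    present_values[rate] = pv
    print(f"discount rate {rate:.0%}: present value ~ ${pv / 1e9:,.0f} billion")
```

Moving the rate from 1% to 5%, both within the range economists have seriously defended, shrinks the present value by a factor of nearly fifty, which is why the choice of rate dominates the policy recommendation an integrated assessment model produces.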
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes models almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
In this paper, we try to shed light on the ontological puzzle pertaining to models and to contribute to a better understanding of what models are. Our suggestion is that models should be regarded as a specific kind of sign according to the sign theory put forward by Charles S. Peirce, and, more precisely, as icons, i.e. as signs which are characterized by a similarity relation between sign (model) and object (original). We argue for this (1) by analyzing from a semiotic point of view the representational relation which is characteristic of models. We then corroborate our hypothesis (2) by discussing the conceptual differences between icons, i.e. models, and indexical and symbolic signs, and (3) by putting forward a general classification of all icons into three functional subclasses (images, diagrams, and metaphors). Subsequently, we (4) integratively refine our results by resorting to two influential and, as can be shown, complementary philosophy of science approaches to models. This yields the following result: models are determined by a semiotic structure in which a subject intentionally uses an object, i.e. the model, as a sign for another object, i.e. the original, in the context of a chosen theory or language in order to attain a specific end. The subject does so by instituting a representational relation in which the syntactic structure of the model, i.e. its attributes and relations, represents by way of a mapping the properties of the original, which are hence regarded as similar in a relevant manner.
In this paper we review some problems with traditional approaches for acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer. We show that in a first step it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only once designers have achieved some understanding of these representational mechanisms can user interfaces be designed that enable individual experience and skill development. In this paper we review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.