The Representational Theory of Measurement conceives measurement as establishing homomorphisms from empirical relational structures into numerical relational structures, called models. There are two different approaches to dealing with the justification of a model: an axiomatic and an empirical approach. The axiomatic approach verifies whether a given relational structure satisfies certain axioms to secure homomorphic mapping. The empirical approach conceives models as functioning as measuring instruments by transferring observations of a phenomenon under investigation into quantitative facts about that phenomenon. These facts are evaluated by their accuracy and precision. Precision is generally achieved by least squares methods and accuracy by calibration. For calibration, standards are needed. Two polar strategies can then be distinguished: white-box modeling and black-box modeling. The first strategy aims at estimating the invariant (structural) equations of the phenomenon, thereby fulfilling Hertz’s correctness requirement. The latter strategy uses known stable facts about the phenomenon to adjust the model parameters, thereby fulfilling Hertz’s appropriateness requirement. For this latter strategy, the requirement of models as homomorphic mappings has been dropped. While the axiomatic approach is more often used for measurement in the laboratory, the empirical approach is more appropriate for measurement outside the laboratory. The reason is that for measurement of phenomena outside the laboratory, one also needs to take account of the environment to achieve accurate results. Environments are generally too relation-rich for an axiomatic approach, which is applicable only to relation-poor systems (laboratories). The white-box modeling strategy, reflecting the complexity of the environment due to its correctness requirement, will, however, lead to immensely large models. To avoid this problem, modular design is an appropriate strategy to reduce this complexity. Modular design is a grey-box modeling strategy. Grey-box models are assemblies of modules; these are black boxes with standard interfaces. It should be noted that the structure of the assemblage need not be homomorphic to the relations describing the interaction between phenomenon and environment. These three modeling strategies map out the possible designs for computer simulations as measuring instruments. Whether a simulation is based on a white-box, grey-box or black-box model is determined only by (the complexity of) the relationship between the phenomenon and its environment and not by e.g. its materiality or physicality.
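For readers unfamiliar with the Representational Theory of Measurement invoked above, the homomorphism requirement can be stated compactly. The following display is a standard textbook formulation, not notation taken from this abstract:

\[
\varphi : A \to \mathbb{R} \quad \text{such that} \quad a \succsim b \;\Longleftrightarrow\; \varphi(a) \ge \varphi(b) \quad \text{for all } a, b \in A,
\]

where \(\langle A, \succsim \rangle\) is the empirical relational structure (e.g. objects ordered by "at least as heavy as") and \(\langle \mathbb{R}, \ge \rangle\) the numerical one. The axioms the axiomatic approach checks (transitivity, Archimedean conditions, and the like) are exactly the conditions guaranteeing that such a \(\varphi\) exists.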
Both von Neumann and Wiener were outsiders to biology. Both were inspired by biology and both proposed models and generalizations that proved inspirational for biologists. Around the same time in the 1940s, von Neumann developed the notion of self-reproducing automata and Wiener suggested an explication of teleology using the notion of negative feedback. These efforts were similar in spirit. Both von Neumann and Wiener used mathematical ideas to attack foundational issues in biology, and the concepts they articulated had a lasting effect. But there were significant differences as well. Von Neumann presented a how-possibly model, which sparked interest by mathematicians and computer scientists, while Wiener collaborated more directly with biologists, and his proposal influenced the philosophy of biology. The two cases illustrate different strategies by which mathematicians, the “professional outsiders” of science, can choose to guide their engagement with biological questions and with the biological community, and illustrate different kinds of generalizations that mathematization can contribute to biology. The different strategies employed by von Neumann and Wiener and the types of models they constructed may have affected the fate of von Neumann’s and Wiener’s ideas – as well as the reputation, in biology, of von Neumann and Wiener themselves.
In 1966, Richard Levins argued that there are different strategies in model building in population biology. In this paper, I reply to Orzack and Sober’s (1993) critiques of Levins, and argue that his views on modeling strategies apply also in the context of evolutionary genetics. In particular, I argue that there are different ways in which models are used to ask and answer questions about the dynamics of evolutionary change, prospectively and retrospectively, in classical versus molecular evolutionary genetics. Further, I argue that robustness analysis is a tool for, if not confirmation, then something near enough, in this discipline.
This paper treats the computational modeling of size dependence in microstructure models of metals. Different gradient crystal plasticity strategies are analyzed and compared. For the numerical implementation, a dual-mixed finite element formulation which is suitable for parallelization is suggested. The paper ends with a representative numerical example for polycrystals.
This paper examines creative strategies employed in scientific modelling. It is argued that being creative is not a discrete event, but rather an ongoing effort consisting of many individual 'creative acts'. These take place over extended periods of time and can be carried out by different people, working on different aspects of the same project. The example of extended extragalactic radio sources shows that, in order to model a complicated phenomenon in its entirety, the modelling task is split up into smaller problems that result in several sub-models. This is a way of using cognitive resources efficiently and in a way which overcomes their limitations. Another aspect of modelling that requires creativity is the employment of visualisation in order to reassemble, i.e. recreate the unity of, the various sub-models. This illustrates how the creative effort required to deal with the complexity of the complicated phenomenon of radio sources is channelled in order to use cognitive resources efficiently and to stay within their capacity.
John Maynard Smith is the person most responsible for the use of game theory in evolutionary biology, having introduced and developed its major concepts, and later surveyed its uses. In this paper I look at some rhetorical work done by Maynard Smith and his co-author G.R. Price to make game theory a standard and common modelling tool for the evolutionary study of behavior. The original presentation of the ideas — in a 1973 Nature article — is frequently cited but almost certainly rarely read. It took reformulation of the approach to create a usable model and an object of study. Perhaps paradoxically, the new model dealt with more abstract objects than did its predecessor, but because of that a better case could be made for its realism. The particular strategy of abstraction allowed game-theoretic modelling to gain a certain measure of autonomy from empirical problems, and thus to flourish.
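To give a sense of the kind of object the reformulated approach produced, the sketch below sets up a Hawk-Dove payoff matrix and numerically checks the evolutionarily stable strategy (ESS) condition for the mixed strategy p* = V/C. The payoff values V and C are illustrative, not taken from the 1973 article.

```python
import numpy as np

# Hawk-Dove payoff matrix (row player's payoff), with resource value V and cost C.
# Illustrative numbers only.
V, C = 2.0, 4.0
payoff = np.array([[(V - C) / 2, V],      # Hawk vs Hawk, Hawk vs Dove
                   [0.0,        V / 2]])  # Dove vs Hawk, Dove vs Dove

def expected_payoff(p_self, p_opponent):
    """Expected payoff of a strategy playing Hawk with prob p_self
    against an opponent playing Hawk with prob p_opponent."""
    s = np.array([p_self, 1 - p_self])
    o = np.array([p_opponent, 1 - p_opponent])
    return s @ payoff @ o

# With V < C the mixed strategy p* = V/C is the candidate ESS.
p_star = V / C
# ESS check: no mutant does better against p* than p* itself does (first column
# of output is ~0), and p* does better against mutants than they do against
# themselves (second column is positive).
for mutant in [0.0, 1.0, 0.25, 0.9]:
    against_pstar = expected_payoff(mutant, p_star) - expected_payoff(p_star, p_star)
    against_mutant = expected_payoff(p_star, mutant) - expected_payoff(mutant, mutant)
    print(f"mutant p={mutant}: E(m,p*)-E(p*,p*)={against_pstar:.3f}, "
          f"E(p*,m)-E(m,m)={against_mutant:.3f}")
```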
Green offers us two options: either connectionist models are literal models of brain activity or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, idealised models of the brain that are capable of providing genuine explanations of cognitive phenomena.
Two controversies exist regarding the appropriate characterization of hierarchical and adaptive evolution in natural populations. In biology, there is the Wright–Fisher controversy over the relative roles of random genetic drift, natural selection, population structure, and interdemic selection in adaptive evolution begun by Sewall Wright and Ronald Aylmer Fisher. There is also the Units of Selection debate, spanning both the biological and the philosophical literature and including the impassioned group-selection debate. Why do these two discourses exist separately, and interact relatively little? We postulate that the reason for this schism can be found in the differing focus of each controversy, a deep difference itself determined by distinct general styles of scientific research guiding each discourse. That is, the Wright–Fisher debate focuses on adaptive process, and tends to be instructed by the mathematical modeling style, while the focus of the Units of Selection controversy is adaptive product, and is typically guided by the function style. The differences between the two discourses can be usefully tracked by examining their interpretations of two contested strategies for theorizing hierarchical selection: horizontal and vertical averaging.
Rather than taking the ontological fundamentality of an ideal microphysics as a starting point, this article sketches an approach to the problem of levels that swaps assumptions about ontology for assumptions about inquiry. These assumptions can be implemented formally via computational modeling techniques that will be described below. It is argued that these models offer a way to save some of our prominent commonsense intuitions concerning levels. This strategy offers a way of exploring the individuation of higher level properties in a systematic and formally constrained manner.
Interest in the computational aspects of modeling has been steadily growing in philosophy of science. This paper aims to advance the discussion by articulating the way in which modeling and computational errors are related and by explaining the significance of error management strategies for the rational reconstruction of scientific practice. To this end, we first characterize the role and nature of modeling error in relation to a recipe for model construction known as Euler’s recipe. We then describe a general model that allows us to assess the quality of numerical solutions in terms of measures of computational errors that are completely interpretable in terms of modeling error. Finally, we emphasize that this type of error analysis involves forms of perturbation analysis that go beyond the basic model-theoretical and statistical/probabilistic tools typically used to characterize the scientific method; this demands that we revise and complement our reconstructive toolbox in a way that can affect our normative image of science.
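The abstract above connects modeling error to measures of computational error in numerical solutions. As a minimal, self-contained illustration (not the paper's general model), the snippet below estimates the discretization error of a forward-Euler solution by comparing it with a refined solution:

```python
import numpy as np

def euler(f, y0, t0, t1, n_steps):
    """Forward Euler integration of dy/dt = f(t, y) from t0 to t1."""
    t, y = t0, y0
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: -y          # dy/dt = -y, exact solution y(t) = exp(-t)
exact = np.exp(-1.0)

y_h  = euler(f, 1.0, 0.0, 1.0, 100)   # step size h
y_h2 = euler(f, 1.0, 0.0, 1.0, 200)   # step size h/2

# Richardson-style estimate: for a first-order method the error roughly halves
# when the step is halved, so |y_h2 - y_h| approximates the error of the
# refined solution without knowing the exact answer.
print("true error of coarse solution :", abs(y_h - exact))
print("step-halving error estimate   :", abs(y_h2 - y_h))
```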
Effective use of coping strategies by people with chronic pain conditions is associated with better functioning and adjustment to chronic disease. Although the effects of coping on pain have been well studied, less is known about how specific coping strategies relate to actual physical activity patterns in daily life. The purpose of this study was to evaluate how different coping strategies relate to symptoms and physical activity patterns in a sample of adults with knee and hip osteoarthritis (N = 44). Physical activity was assessed by wrist-worn accelerometry; coping strategy use was assessed by the Chronic Pain Coping Inventory. We hypothesized that the use of coping strategies that reflect approach behaviors (e.g., Task Persistence) would be associated with higher average levels of physical activity, whereas avoidance coping behaviors (e.g., Resting, Asking for Assistance, Guarding) and Pacing would be associated with lower average levels of physical activity. We also evaluated whether coping strategies moderated the association between momentary symptoms (pain and fatigue) and activity. We hypothesized that higher levels of approach coping would be associated with a weaker association between symptoms and activity compared to lower levels of this type of coping. Multilevel modeling was used to analyze the momentary association between coping and physical activity. We found that higher body mass index, fatigue, and the use of Guarding were significantly related to lower activity levels, whereas Asking for Assistance was significantly related to higher activity levels. Only Resting moderated the association between pain and activity. Guarding, Resting, Task Persistence, and Pacing moderated the association between fatigue and activity. This study provides an initial understanding of how people with osteoarthritis cope with symptoms as they engage in daily life activities using ecological momentary assessment and objective physical activity measurement.
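The study described above relies on multilevel modeling of momentary symptom-activity associations. The sketch below shows what a random-intercept model with a symptom-by-coping interaction could look like using Python's statsmodels; the data are simulated and the variable names (subject, pain, guarding, activity) are hypothetical stand-ins, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: repeated momentary observations nested within subjects,
# with a subject-level coping score ("guarding"). Illustrative only.
rng = np.random.default_rng(0)
rows = []
for s in range(30):                       # 30 subjects
    guarding = rng.normal()
    subj_intercept = rng.normal(scale=0.5)
    for _ in range(20):                   # 20 momentary observations each
        pain = rng.normal()
        activity = 5 + subj_intercept - 0.3 * pain - 0.2 * guarding * pain + rng.normal()
        rows.append((s, pain, guarding, activity))
df = pd.DataFrame(rows, columns=["subject", "pain", "guarding", "activity"])

# Random-intercept multilevel model; the pain:guarding interaction tests whether
# the coping strategy moderates the momentary pain-activity association.
model = smf.mixedlm("activity ~ pain * guarding", df, groups=df["subject"])
print(model.fit().summary())
```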
Loop analysis is a method of qualitative modeling anticipated by Sewall Wright and systematically developed by Richard Levins. In Levins’ (1966) distinctions between modeling strategies, loop analysis sacrifices precision for generality and realism. Besides criticizing the clarity of these distinctions, Orzack and Sober (1993) argued qualitative modeling is conceptually and methodologically problematic. Loop analysis of the stability of ecological communities shows this criticism is unjustified. It presupposes an overly narrow view of qualitative modeling and underestimates the broad role models play in scientific research, especially in helping scientists represent and understand complex systems.
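As background to the loop-analysis discussion above, the following numerical sketch illustrates the qualitative-stability idea behind it: only the signs of the community-matrix entries are assumed known, magnitudes are sampled at random, and stability is checked via eigenvalues. This illustrates the underlying idea rather than Levins' graphical loop-analysis procedure itself, and the sign pattern is a hypothetical predator-prey example:

```python
import numpy as np

# Sign structure of a simple predator-prey community matrix: species 1 (prey)
# is self-damped and harmed by species 2 (predator), which benefits from prey.
signs = np.array([[-1, -1],
                  [ 1,  0]])

rng = np.random.default_rng(1)
stable = 0
n_trials = 10_000
for _ in range(n_trials):
    magnitudes = rng.uniform(0.1, 1.0, size=signs.shape)
    A = signs * magnitudes
    # Local stability: all eigenvalues of the community matrix have negative real part.
    if np.all(np.linalg.eigvals(A).real < 0):
        stable += 1

# For this sign pattern every parameterization is stable, i.e. the conclusion
# follows from signs alone -- the hallmark of a qualitative result.
print(f"fraction of random parameterizations that are stable: {stable / n_trials:.3f}")
```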
The article presents a methodological strategy for guiding the educational process so as to develop the intellectual skill of modeling in students; it sets out the strategy's main epistemic foundations, stages, and essential actions. Scientific research methods were applied to this end. Verification of the results provides positive evidence of the strategy's pertinence, given the co-participatory, jointly led character that educational influences acquire within the institutional context in the direction of a single educational process.
This paper contrasts and compares strategies of model-building in condensed matter physics and biology, with respect to their alleged unequal susceptibility to trade-offs between different theoretical desiderata. It challenges the view, often expressed in the philosophical literature on trade-offs in population biology, that the existence of systematic trade-offs is a feature that is specific to biological models, since unlike physics, biology studies evolved systems that exhibit considerable natural variability. By contrast, I argue that the development of ever more sophisticated experimental, theoretical, and computational methods in physics is beginning to erode this contrast, since condensed matter physics is now in a position to measure, describe, model, and manipulate sample-specific features of individual systems – for example at the mesoscopic level – in a way that accounts for their contingency and heterogeneity. Model-building in certain areas of physics thus turns out to be more akin to modeling in biology than has been supposed and, indeed, has traditionally been the case.
In this paper I argue that the appropriate analogy for “understanding what makes simulation results reliable” in Global Climate Modeling is not with scientific experimentation or measurement, but—at least in the case of the use of global climate models for policy development—with the applications of science in engineering design problems. The prospects for using this analogy to argue for the quantitative reliability of GCMs are assessed and compared with other potential strategies.
Biologists and economists use models to study complex systems. This similarity between these disciplines has led to an interesting development: the borrowing of various components of model-based theorizing between the two domains. A major recent example of this strategy is economists’ utilization of the resources of evolutionary biology in order to construct models of economic systems. This general strategy has come to be called evolutionary economics and has been a source of much debate among economists. Although philosophers have developed literatures on the nature of models and modeling, the unique issues surrounding this kind of interdisciplinary model building have yet to be independently investigated. In this paper, we utilize evolutionary economics as a case study in the investigation of more general issues concerning interdisciplinary modeling. We begin by critiquing the distinctions currently used within the evolutionary economics literature and propose an alternative carving of the conceptual terrain. We then argue that the three types of evolutionary economics we distinguish capture distinctions that will be important whenever resources of model-based theorizing are borrowed across distinct scientific domains. Our analysis of these model-building strategies identifies several of the unique methodological and philosophical issues that confront interdisciplinary modeling.
Are there relationships between consciousness and the material world? Empirical evidence for such a connection was reported in several meta-analyses of mind-matter experiments designed to address this question. In this paper we consider such meta-analyses from a statistical modeling perspective, emphasizing strategies to validate the models and the associated statistical procedures. In particular, we explicitly model increased data variability and selection mechanisms, which permits us to estimate 'selection profiles' and to reassess the experimental effect in view of potential other effects. An application to the data pool considered in the influential meta-analysis of Radin and Nelson (1989) yields indications of the presence of random and selection effects. Adjustment for possible selection is found to render the experimental effect, which is significant without such an adjustment, non-significant. Somewhat different conclusions apply to a subset of the data deserving separate consideration. The actual origin of the data features that are described as experimental, random, or selection effects within the proposed model cannot be clarified by our approach and remains open.
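The abstract above models meta-analytic data with explicit random (extra-variability) effects. As a simplified illustration of the random-effects component only (the selection-model part requires additional machinery), here is a DerSimonian-Laird estimate computed on made-up effect sizes; the numbers are purely illustrative:

```python
import numpy as np

# Hypothetical per-study effect sizes and within-study variances.
y = np.array([0.12, 0.05, 0.20, -0.03, 0.15, 0.08])
v = np.array([0.010, 0.020, 0.015, 0.030, 0.012, 0.025])

# Fixed-effect weights and Cochran's Q heterogeneity statistic.
w = 1 / v
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooling: extra variability inflates each study's variance.
w_re = 1 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {mu_re:.4f} +/- {1.96 * se_re:.4f}")
```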
The processes of wound healing and bone regeneration and problems in tissue engineering have been an active area for mathematical modeling in the last decade. Here we review a selection of recent models which aim at deriving strategies for improved healing. In wound healing, the models have particularly focused on the inflammatory response in order to improve the healing of chronic wounds. For bone regeneration, the mathematical models have been applied to design optimal and new treatment strategies for normal and specific cases of impaired fracture healing. For the field of tissue engineering, we focus on mathematical models that analyze the interplay between cells and their biochemical cues within the scaffold to ensure optimal nutrient transport and maximal tissue production. Finally, we briefly comment on numerical issues arising from simulations of these mathematical models.
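For the tissue-engineering models mentioned above, a typical ingredient is nutrient transport with cellular uptake. The sketch below is a generic one-dimensional diffusion-consumption computation, not one of the reviewed models; parameter values are arbitrary:

```python
import numpy as np

# Explicit finite-difference sketch of nutrient transport in a scaffold:
# du/dt = D * u_xx - k * u (diffusion with uptake by cells). Illustrative values.
D, k = 1e-3, 0.05          # diffusion coefficient, uptake rate
L, nx = 1.0, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D       # within the explicit stability limit dt <= dx^2 / (2 D)

u = np.zeros(nx)
u[0] = u[-1] = 1.0         # nutrient-rich medium at both scaffold boundaries

for _ in range(5000):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (D * lap - k * u)
    u[0] = u[-1] = 1.0     # re-impose boundary conditions each step

# A low central concentration signals a nutrient-starved scaffold core.
print("minimum nutrient concentration at the scaffold centre:", u[nx // 2])
```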
Commentary on our target article centers around six main topics: (1) strategies in modeling the neurobehavioral foundation of human behavioral traits; (2) clarification of the construct of affiliation; (3) developmental aspects of affiliative bonding; (4) modeling disorders of affiliative reward; (5) serotonin and affiliative behavior; and (6) neural considerations. After an initial important research update in section R1, our Response is organized around these topics in the following six sections, R2 to R7.
The Lotka–Volterra predator-prey model is a widely known example of model-based science. Here we reexamine Vito Volterra’s and Umberto D’Ancona’s original publications on the model, and in particular their methodological reflections. On this basis we develop several ideas pertaining to the philosophical debate on the scientific practice of modeling. First, we show that Volterra and D’Ancona chose modeling because the problem in hand could not be approached by more direct methods such as causal inference. This suggests a philosophically insightful motivation for choosing the strategy of modeling. Second, we show that the development of the model follows a trajectory from a “how possibly” to a “how actually” model. We discuss how and to what extent Volterra and D’Ancona were able to advance their model along that trajectory. It turns out they were unable to establish that their model was fully applicable to any system. Third, we consider another instance of model-based science: Darwin’s model of the origin and distribution of coral atolls in the Pacific Ocean. Darwin argued more successfully that his model faithfully represents the causal structure of the target system, and hence that it is a “how actually” model.
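For reference, the model under discussion is usually written in the following standard form (the notation is the textbook convention, not necessarily Volterra's original):

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,
\]

where \(x\) is prey density, \(y\) predator density, \(\alpha\) the prey growth rate, \(\beta\) the predation rate, \(\delta\) the conversion of consumed prey into predator growth, and \(\gamma\) the predator death rate.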
Experimental activity is traditionally identified with testing the empirical implications or numerical simulations of models against data. In critical reaction to the ‘tribunal view’ on experiments, this essay will show the constructive contribution of experimental activity to the processes of modeling and simulating. Based on the analysis of a case in fluid mechanics, it will focus specifically on two aspects. The first is the controversial specification of the conditions in which the data are to be obtained. The second is conceptual clarification, with a redefinition of concepts central to the understanding of the phenomenon and the conditions of its occurrence.
This article briefly reviews the fundamentals of structural equation modeling for readers unfamiliar with the technique, then goes on to offer a review of the Martin and Cullen paper. In summary, a number of fit indices reported by the authors reveal that the data do not fit their theoretical model, and thus the authors’ conclusion that the model was “promising” is unwarranted.
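The commentary above turns on fit indices, though it does not say which ones were reported. One commonly used index, given here only as an example of how such measures penalize misfit relative to model degrees of freedom, is the RMSEA:

\[
\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2_M - df_M}{df_M\,(N-1)},\; 0\right)},
\]

where \(\chi^2_M\) and \(df_M\) are the fitted model's chi-square and degrees of freedom and \(N\) is the sample size; values above roughly 0.10 are conventionally read as poor fit.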
Explaining the complex dynamics exhibited in many biological mechanisms requires extending the recent philosophical treatment of mechanisms that emphasizes sequences of operations. To understand how nonsequentially organized mechanisms will behave, scientists often advance what we call dynamic mechanistic explanations. These begin with a decomposition of the mechanism into component parts and operations, using a variety of laboratory-based strategies. Crucially, the mechanism is then recomposed by means of computational models in which variables or terms in differential equations correspond to properties of its parts and operations. We provide two illustrations drawn from research on circadian rhythms. Once biologists identified some of the components of the molecular mechanism thought to be responsible for circadian rhythms, computational models were used to determine whether the proposed mechanisms could generate sustained oscillations. Modeling has become even more important as researchers have recognized that the oscillations generated in individual neurons are synchronized within networks; we describe models being employed to assess how different possible network architectures could produce the observed synchronized activity.
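The dynamic mechanistic explanations described above recompose a mechanism as differential equations and then ask whether sustained oscillations result. A minimal, generic example of that move is a Goodwin-type negative-feedback oscillator; the parameters below are illustrative and are not taken from the circadian models the authors discuss:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Goodwin-type negative-feedback loop: mRNA -> protein -> nuclear repressor,
# which inhibits transcription. Parameter values are illustrative only.
def goodwin(t, state, n=10, K=1.0):
    x, y, z = state
    dx = 1.0 / (1.0 + (z / K) ** n) - 0.2 * x   # repressible transcription, degradation
    dy = 0.5 * x - 0.2 * y                       # translation, degradation
    dz = 0.5 * y - 0.2 * z                       # nuclear import, degradation
    return [dx, dy, dz]

sol = solve_ivp(goodwin, (0, 200), [0.1, 0.1, 0.1], max_step=0.1)

# A persistent spread between late-time minimum and maximum indicates that the
# proposed feedback loop can generate sustained oscillations.
late = sol.y[0, sol.t > 100]
print("late-time range of mRNA level:", late.min(), "-", late.max())
```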
While corporate social responsibility (CSR) is becoming a mainstream issue for many organizations, most of the research to date addresses CSR in large businesses rather than in small- and medium-sized enterprises (SMEs), because it is too often considered a prerogative of large businesses only. The role of SMEs in an increasingly dynamic context is now being questioned, including what factors might affect their socially responsible behaviour. The goal of this paper is to make a comparison of SME and large firm CSR strategies. Furthermore, size of the firm is analyzed as a factor that influences specific choices in the CSR field, and studied by means of a sample of 3,680 Italian firms. Based on a multi-stakeholder framework, the analysis provides evidence that large firms are more likely to identify relevant stakeholders and meet their requirements through specific and formal CSR strategies.
In the present study, I sought to more fully understand stakeholder organizations’ strategies for influencing business firms. I conducted interviews with 28 representatives of four environmental non-governmental organizations (ENGOs): Natural Resources Defense Council (NRDC), Greenpeace, Environmental Defense (ED), and Union of Concerned Scientists (UCS). Qualitative methods were used to analyze these data, and additional data collection, in the form of reviews of websites and other documents, was conducted when materials were provided by interviewees or were needed to more fully comprehend interviewees’ comments. Six propositions derived from Frooman (1999) formed the basis for the initial data analysis; all six propositions were supported to some extent. Perhaps more interestingly, the data revealed that Frooman’s model is too parsimonious to adequately describe stakeholder influence strategies and related alliances, necessitating the development of an alternative theoretical model grounded in the data collected.
In my ‘Seven Sins of Pseudo-Science’ (Journal for General Philosophy of Science 1993) I argued against Grünbaum that Freud commits all Seven Sins of Pseudo-Science. Yet how does Freud manage to fool many people, including such a sophisticated person as Grünbaum? My answer is that Freud is a sophisticated pseudo-scientist, using all Seven Strategies of the Sophisticated Pseudo-Scientist to keep up appearances, to wit, (1) the Humble Empiricist, (2) the Severe Self-criticism, (3) the Unbiased Me, (4) the Striking but Irrelevant Example, (5) the Proof Given Elsewhere, (6) the Favorable Compromise, and (7) the Display of Methodological Sophistication. One should note that not all strategies are disreputable in themselves. But all are used very cunningly so as to hide weaknesses in Freud's arguments. To be fair, quite a few of his methodological remarks are sophisticated enough. As Freud combines these sophisticated remarks with an appalling methodology in practice, I call him a sophisticated pseudo-scientist. I do not claim that these rhetorical strategies are specific to him.
The article first addresses the importance of cognitive modeling, in terms of its value to cognitive science (as well as other social and behavioral sciences). In particular, it emphasizes the use of cognitive architectures in this undertaking. Based on this approach, the article addresses, in detail, the idea of a multi-level approach that ranges from social to neural levels. In physical sciences, a rigorous set of theories is a hierarchy of descriptions/explanations, in which causal relationships among entities at a high level can be reduced to causal relationships among simpler entities at a more detailed level. We argue that a similar hierarchy makes possible an equally productive approach toward cognitive modeling. The levels of models that we conceive in relation to cognition include, at the highest level, sociological/anthropological models of collective human behavior, behavioral models of individual performance, cognitive models involving detailed mechanisms, representations, and processes, as well as biological/physiological models of neural circuits, brain regions, and other detailed biological processes.
Recent research on corporate social responsibility (CSR) suggests the need for further exploration into the relationship between small and medium-sized enterprises (SMEs) and CSR. SMEs rarely use the language of CSR to describe their activities, but informal CSR strategies play a large part in them. The goal of this article is to investigate whether differences exist between the formal and informal CSR strategies through which firms manage relations with and the claims of their stakeholders. In this context, formal CSR strategies seem to characterize large firms while informal CSR strategies prevail among micro, small, and medium-sized enterprises. We use a sample of 3,626 Italian firms to investigate our research questions. Based on a multistakeholder framework, the analysis provides evidence that small businesses’ use of CSR, involving strategies with an important impact on the bottom line, reflects an attempt to secure their license to operate in the communities; while large firms rarely make attempts to integrate their CSR strategies into explicit management systems.
An immunizing strategy is an argument brought forward in support of a belief system, though independent from that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence. By contrast, an epistemic defense mechanism is defined as a structural feature of a belief system which has the same effect of deflecting arguments and evidence. We discuss the remarkable recurrence of certain patterns of immunizing strategies and defense mechanisms in pseudoscience and other belief systems. Five different types will be distinguished and analyzed, with examples drawn from widely different domains. The difference between immunizing strategies and defense mechanisms is analyzed, and their epistemological status is discussed. Our classification sheds new light on the various ways in which belief systems may achieve invulnerability against empirical evidence and rational criticism, and we propose our analysis as part of an explanation of these belief systems’ enduring appeal and tenacity.
This paper aims at integrating the work on analogical reasoning in Cognitive Science into the long trend of philosophical interest, in this century, in analogical reasoning as a basis for scientific modeling. In the first part of the paper, three simulations of analogical reasoning, proposed in cognitive science, are presented: Gentner's Structure Matching Engine, Mitchell's and Hofstadter's COPYCAT, and the Analogical Constraint Mapping Engine, proposed by Holyoak and Thagard. The differences and controversial points in these simulations are highlighted in order to make explicit their presuppositions concerning the nature of analogical reasoning. In the last part, this debate in cognitive science is applied to some traditional philosophical accounts of formal and material analogies as a basis for scientific modeling, like Mary Hesse's, and to more recent ones, that already draw from the work in Artificial Intelligence, like that proposed by Aronson, Harré and Way.
Aristotle saw ethics as a habit that is modeled and developed through practice. Shelley's Victor Frankenstein, though well intentioned in his goals, failed to model ethical behavior for his creation, abandoning it to its own recourse. Today we live in an era of unfettered mergers and acquisitions where once separate and independent media increasingly are concentrated under the control and leadership of the fictitious but legal personhood of a few conglomerated corporations. This paper will explore the impact of mega-media mergers on ethical modeling in journalism. It will diagram the behavioral context underlying the development of ethical habits, discuss leadership theory as it applies to management, and address the question of whether the creation of mega-media conglomerates will result in responsible corporate citizens or monsters who turn on their creators.
Perhaps due to the numerous community and company benefits associated with corporate volunteer programs, an increasing number of national and international firms are adopting such programs. A major issue in organizing corporate volunteer programs concerns the strategies that are most effective for recruiting employee participation. The results of this study suggest that the most effective strategies for initiating participation in volunteer programs may not be the same as the strategies that are most effective in terms of maximizing the number of volunteer hours contributed by employees. More importantly, the results suggest that the most effective recruitment strategies depend on the age of the employee. The results were discussed in terms of matching the recruitment strategies with the characteristics of the potential volunteers and the nature of the volunteer project.
We analyze different aspects of our quantum modeling approach to human concepts and, more specifically, focus on the quantum effects of contextuality, interference, entanglement, and emergence, illustrating how each of them makes its appearance in specific situations of the dynamics of human concepts and their combinations. We point out the relation of our approach, which is based on an ontology of a concept as an entity in a state changing under influence of a context, with the main traditional concept theories, that is, prototype theory, exemplar theory, and theory theory. We ponder the question of why quantum theory performs so well in its modeling of human concepts, and we shed light on this question by analyzing the role of complex amplitudes, showing how they allow one to describe interference in the statistics of measurement outcomes, while in the traditional theories statistics of outcomes originates in classical probability weights, without the possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all as unique features of the quantum modeling, are explicitly revealed in this article by analyzing human concepts and their dynamics.
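To make the role of complex amplitudes concrete: in this kind of quantum concept model a combined concept is often represented by a superposition state, and the membership probability then acquires an interference term with no classical-weighting analogue. A standard way of writing this (generic notation, not a formula quoted from the article) is

\[
\mu(A \text{ or } B) = \tfrac{1}{2}\,\mu(A) + \tfrac{1}{2}\,\mu(B) + \mathrm{Re}\,\langle A | M | B \rangle,
\]

for the superposition state \(\tfrac{1}{\sqrt{2}}\left(|A\rangle + |B\rangle\right)\) and measurement projector \(M\); the last term, which can be positive or negative, is the interference contribution that complex amplitudes make possible, whereas a classical mixture would yield only the first two terms.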
Information modeling (also known as conceptual modeling or semantic data modeling) may be characterized as the formulation of a model in which information aspects of objective and subjective reality are presented (the application), independent of datasets and processes by which they may be realized (the system). A methodology for information modeling should incorporate a number of concepts which have appeared in the literature, but should also be formulated in terms of constructs which are understandable to and expressible by the system user as well as the system developer. This is particularly desirable in connection with certain intimate relationships, such as being the same as or being a part of.
This article applies the concept of prudence to develop the characteristics of responsible risk-modeling practices in the insurance industry. A critical evaluation of the risk-modeling process suggests that ethical judgments are emergent rather than static, vague rather than clear, particular rather than universal, and still defensible according to the discipline’s established theory, which will support a range of judgments. Thus, positive moral guides for responsible behavior are of limited practical value. Instead, by being prudent, modelers can improve their ability to deal with the ethical and technical complexity of the risk-modeling process. While the application of prudence to resolve ethical challenges in risk modeling, an issue of practical importance to managers, is a first in the literature, the practice of applying an ethical lens to issues of pragmatic importance for managers is well established in Maak and Pless (J Bus Ethics 66:99–115, 2006a; Responsible leadership, 2006b) among others.
This paper describes the processes of cognitive modeling and representation of human expertise for developing an ontology and knowledge base of an expert system. An ontology is an organization and classification of knowledge. Ontological engineering in artificial intelligence (AI) has the practical goal of constructing frameworks for knowledge that allow computational systems to tackle knowledge-intensive problems and supports knowledge sharing and reuse. Ontological engineering is also a process that facilitates construction of the knowledge base of an intelligent system, which can be defined as a computer program that can duplicate problem-solving capabilities of human experts in specific areas. This paper presents the processes of knowledge acquisition, analysis, and representation, which laid the basis for ontology construction. In this case, the processes are applied in ontological engineering for construction of an expert system in the domain of monitoring of a petroleum production and separation facility. The acquired knowledge was also formally represented in two knowledge acquisition tools.
The study describes a method created for the analysis of persuasive strategies, called rhetorical heuristics, which can be applied in speeches where the argument focuses primarily on questions of fact. First, the author explains how the concept emerged from the study of classical oratory. Then the theoretical background of rhetorical heuristics is outlined through briefly discussing relevant aspects of the psychology of decision-making. Finally, an exposition of how one could find these persuasive strategies introduces rhetorical heuristics in more detail.
The distinction between the modeling of information and the modeling of data in the creation of automated systems has historically been important because the development tools available to programmers have been wedded to machine oriented data types and processes. However, advances in software engineering, particularly the move toward data abstraction in software design, allow activities reasonably described as information modeling to be performed in the software creation process. An examination of the evolution of programming languages and development of general programming paradigms, including object-oriented design and implementation, suggests that while data modeling will necessarily continue to be a programmer's concern, more and more of the programming process itself is coming to be characterized by information modeling activities.
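As a small illustration of the data-abstraction point made above, the class below exposes an information-level interface while keeping the data-level representation private; the example and its names are hypothetical, not drawn from the article:

```python
from dataclasses import dataclass, field

# The public methods express the information model ("a customer places orders"),
# while the storage layout (a private list of tuples) is a data-modeling detail
# that can change without affecting users of the class.
@dataclass
class Customer:
    name: str
    _orders: list = field(default_factory=list)   # internal data representation

    def place_order(self, item: str, quantity: int) -> None:
        self._orders.append((item, quantity))

    def total_items_ordered(self) -> int:
        return sum(q for _, q in self._orders)

c = Customer("Ada")
c.place_order("widget", 3)
print(c.total_items_ordered())   # prints 3
```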
Modeling and simulation clearly have an upside. My discussion here will deal with the inevitable downside of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry. In the course of that discussion, I also call on some of my past experience with models and their vulnerabilities.
Richard Levins has advocated the scientific merits of qualitative modeling throughout his career. He believed an excessive and uncritical focus on emulating the models used by physicists and maximizing quantitative precision was hindering biological theorizing in particular. Greater emphasis on qualitative properties of modeled systems would help counteract this tendency, and Levins subsequently developed one method of qualitative modeling, loop analysis, to study a wide variety of biological phenomena. Qualitative modeling has been criticized for being conceptually and methodologically problematic. As a clear example of a qualitative modeling method, loop analysis shows this criticism is indefensible. The method has, however, some serious limitations. This paper describes loop analysis, its limitations, and attempts to clarify the differences between quantitative and qualitative modeling, in content and objective. Loop analysis is but one of numerous types of qualitative analysis, so its limitations do not detract from the currently underappreciated and underdeveloped role qualitative modeling could have within science.
The complexity of cognitive emulation of human diagnostic reasoning is the major challenge in the implementation of computer-based programs for diagnostic advice in medicine. We here present an epistemological model of diagnosis with the ultimate goal of defining a high-level language for cognitive and computational primitives. The diagnostic task proceeds through three different phases: hypotheses generation, hypotheses testing and hypotheses closure. Hypotheses generation has the inferential form of abduction (from findings to hypotheses) constrained under the criterion of plausibility. Hypotheses testing is achieved by a deductive inference (from generated hypotheses to expected findings), followed by an eliminative induction, constrained under the criterion of covering, which matches expected findings against the patient's findings to select the best explanation. Hypotheses closure is a deductive-inductive type of inference very similar to the inferences operating in hypotheses testing. In this case induction matches the consequences of the generated hypotheses against the patient's characteristics or preferences under the criterion of utility. By using the language exploited in this epistemological model, it is possible to describe the cognitive tasks underlying the most influential knowledge-based diagnostic systems.
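A toy rendering of the three phases described above (abductive generation, deductive-eliminative testing, utility-based closure) can make the control flow explicit. The knowledge base, findings, and utility scores below are invented for illustration and do not constitute a medical knowledge base:

```python
# hypothesis -> findings it would explain/predict (hypothetical entries)
KNOWLEDGE = {
    "flu":       {"fever", "cough", "fatigue"},
    "pneumonia": {"fever", "cough", "chest_pain"},
    "allergy":   {"cough", "sneezing"},
}

def generate(findings):
    """Abduction: retain hypotheses that plausibly account for some observed finding."""
    return {h for h, expected in KNOWLEDGE.items() if expected & findings}

def test(hypotheses, findings):
    """Deduction plus eliminative induction: compare expected against observed
    findings and keep the best-covering explanations."""
    coverage = {h: len(KNOWLEDGE[h] & findings) / len(KNOWLEDGE[h]) for h in hypotheses}
    best = max(coverage.values())
    return {h for h, c in coverage.items() if c == best}

def close(hypotheses, utilities):
    """Closure: rank surviving hypotheses by a utility criterion."""
    return sorted(hypotheses, key=utilities.get, reverse=True)

patient_findings = {"fever", "cough", "fatigue"}
candidates = generate(patient_findings)
survivors = test(candidates, patient_findings)
print(close(survivors, {"flu": 0.9, "pneumonia": 0.7, "allergy": 0.2}))
```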
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes it almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
In this paper we review some problems with traditional approaches for acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer. We show that in a first step it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only if designers have achieved some understanding of these representational mechanisms can user interfaces enabling individual experiences and skill development be designed. In this paper we review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.
During the last decade, scholars have identified a number of factors that pose significant challenges to effective business ethics education. This article offers a "coping-modeling, problem-solving" (CMPS) approach (Cunningham, 2006) as one option for addressing these concerns. A rationale supporting the use of the CMPS framework for courses on ethical decision-making in business is provided, following which the implementation processes for this program are described. Evaluative data collected from N = 101 undergraduate business students enrolled in a third-year required course on ethical decision-making in business indicated that the CMPS model is a promising alternative both for overcoming teaching challenges and for facilitating skill acquisition in the areas of ethical recognition, judgment, and action. Limitations and directions for future research are discussed.
The strategies of action employed by a human subject in order to perceive simple 2-D forms on the basis of tactile sensory feedback have been modelled by an explicit computer algorithm. The modelling process has been constrained and informed by the capacity of human subjects both to consciously describe their own strategies, and to apply explicit strategies; thus, the strategies effectively employed by the human subject have been influenced by the modelling process itself. On this basis, good qualitative and semi-quantitative agreement has been achieved between the trajectories produced by a human subject, and the traces produced by a computer algorithm. The advantage of this reciprocal modelling option, besides facilitating agreement between the algorithm and the empirically observed trajectories, is that the theoretical model provides an explanation, and not just a description, of the active perception of the human subject.
Organizational leaders face environmental challenges and pressures that put them under ethical risk. Navigating this ethical risk is demanding given the dynamics of contemporary organizations. Traditional models of ethical decision-making (EDM) are an inadequate framework for understanding how leaders respond to ethical dilemmas under conditions of uncertainty and equivocality. Sensemaking models more accurately illustrate leader EDM and account for individual, social, and environmental constraints. Using the sensemaking approach as a foundation, previous EDM models are revised and extended to comprise a conceptual model of leader EDM. Moreover, the underlying factors in the model are highlighted—constraints and strategies. Four trainable, compensatory strategies (emotion regulation, self-reflection, forecasting, and information integration) are proposed and described that aid leaders in navigating ethical dilemmas in organizations. Empirical examinations demonstrate that tactical application of the strategies may aid leaders in making sense of complex and ambiguous ethical dilemmas and promote ethical behavior. Compensatory tactics such as these should be central to organizational ethics initiatives at the leader level.