Implementations, interpretative malleability, value-laden-ness and the moral significance of agent-based social simulations

  • Open Forum, AI & SOCIETY

Abstract

The focus of social simulation on representing the social world calls for an investigation of whether its implementations are inherently value-laden. In this article, I investigate what kind of thing implementation is in social simulation and consider the extent to which it has moral significance. When the purpose of a computational artefact is to simulate human institutions, designers with different value judgements may have rational reasons for developing different implementations. I provide three arguments to show that different implementations amount to taking different moral stands via the artefact. First, the meaning of a social simulation is not homogeneous among its users, which indicates that simulations have high interpretive malleability. I take malleability to be the condition for a simulation to serve as a metaphorical vehicle for representing the social world, allowing for different value judgements about the institutional world that the artefact is expected to simulate. Second, simulating the social world involves distinguishing between malfunctions of the artefact and representation gaps, a distinction that reflects the role of meaning in simulating the social world and how meaning may or may not remain coherent among the models that constitute a single implementation. Third, social simulations are akin to Kroes' (Technical artefacts: creations of mind and matter: a philosophy of engineering design, Springer, Dordrecht, 2012) techno-symbolic artefacts, in which the artefact's effectiveness relative to a purpose hinges not only on the functional effectiveness of the artefact's structure but also on the artefact's meaning. Meaning, not just technical function, makes implementations morally appraisable relative to a purpose. I examine Schelling's model of ethnic residential segregation as an example in which different implementations amount to taking different moral stands via the artefact.

Notes

  1. See e.g. Edmonds and Meyer (2017) and JASSS—The Journal of Artificial Societies and Social Simulation, http://jasss.soc.surrey.ac.uk.

  2. These things need not be concrete objects or social structures in the external environment; they may be numbers, abstract structures, or imaginary entities.

  3. Arnold (2014), for example, contrasts the empirical usefulness of the Schelling model with the empirical uselessness of Axelrod’s reiterated Prisoner’s Dilemma simulations of the evolution of cooperation. According to Arnold (2014), while the latter model has ‘remained entirely unsuccessful in terms of generating explanations for empirical instances of cooperation’, the assumptions on which the former model rests can be tested empirically. As regards the Schelling model, on Arnold’s account, whether individuals have a threshold for how many neighbours of a different colour they tolerate, and whether they move to another neighbourhood if this threshold is passed, are assumptions that can be tested empirically with the usual methods of empirical social research.

  4. See, for example, Eric W. Weisstein, ‘von Neumann Neighborhood’, from MathWorld—A Wolfram Web Resource. https://mathworld.wolfram.com/vonNeumannNeighborhood.html.
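The neighbourhood structure referenced in this note can be sketched in a few lines of Python (an illustrative sketch, not from the article; the function name is my own): the von Neumann neighbourhood of radius r comprises all cells within Manhattan distance r of a given cell.

```python
def von_neumann_neighbourhood(x, y, r=1):
    """Cells within Manhattan distance r of (x, y), excluding (x, y) itself."""
    return [(x + dx, y + dy)
            for dx in range(-r, r + 1)
            for dy in range(-(r - abs(dx)), r - abs(dx) + 1)
            if (dx, dy) != (0, 0)]
```

With r = 1 this yields the four orthogonal neighbours; by contrast, the eight-neighbour setting used in Schelling’s model corresponds to the Moore neighbourhood, which also includes the diagonals.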

  5. For instance, if an individual has at most three neighbours and a minimum tolerance level of one-third of like-coloured neighbours, s/he accepts only situations in which two or more of her/his neighbours are of like colour, which corresponds to an effective minimum tolerance of two-thirds. If, as in Schelling’s model, there are at most eight neighbours and a minimum tolerance of one-third, the individual accepts at least 1 like neighbour out of 1, 1 out of 2, 2 out of 3, 2 out of 4, 2 out of 5, 3 out of 6, 3 out of 7 and 3 out of 8. Over many runs this amounts to an effective tolerance of approximately one-half on a weighted average. Under this interpretation, the authors claim the model shows a linear relation between tolerance and segregation levels.
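The acceptance rule described in this note can be made explicit with a short sketch (illustrative only; the function name and the use of exact rational arithmetic are my own): an agent with k neighbours is content only when its number of like-coloured neighbours, as a fraction of k, strictly exceeds the nominal tolerance of one-third.

```python
from fractions import Fraction

def min_like_neighbours(k, tolerance=Fraction(1, 3)):
    """Smallest m such that m like neighbours out of k strictly exceed the tolerance."""
    m = 0
    while Fraction(m, k) <= tolerance:
        m += 1
    return m

# Acceptance thresholds for 1..8 neighbours, reproducing the note's list:
# 1/1, 1/2, 2/3, 2/4, 2/5, 3/6, 3/7, 3/8.
thresholds = {k: min_like_neighbours(k) for k in range(1, 9)}

# Simple (unweighted) average of the accepted fractions over neighbourhood
# sizes 1..8; the note's weighted average over runs will differ slightly,
# but both come out at roughly one-half.
effective = sum(Fraction(m, k) for k, m in thresholds.items()) / 8
```

Exact rationals avoid floating-point edge cases at the boundary values (e.g. 2 out of 6 is exactly one-third and is therefore rejected under a strict inequality).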

References

  • Anzola D (2021) The theory-practice gap in the evaluation of agent-based social simulations. Science in Context (forthcoming)

  • Arnold E (2014) What’s wrong with social simulations? Monist 97(3):361–379

  • Axelrod R (1997a) The dissemination of culture: a model with local convergence and global polarization. J Conflict Resolut 41(2):203–226

  • Axelrod R (1997b) The complexity of cooperation—agent-based models of competition and collaboration. Princeton University Press, Princeton

  • Bedau MA (1997) Weak emergence. In: Tomberlin J (ed) Philosophical perspectives: mind, causation, and world, vol 11. Blackwell, Oxford, pp 375–399

  • Boero R, Squazzoni F (2005) Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. J Artif Soc Soc Simul 8(4):6

  • Brey P (2014) Virtual reality and computer simulation. In: Sandler RL (ed) Ethics and emerging technologies. Palgrave Macmillan, London

  • David N, Sichman JS, Coelho H (2005) The logic of the method of agent-based simulation in the social sciences: empirical and intentional adequacy of computer programs. J Artif Soc Soc Simul 8(4):2

  • David N, Sichman JS, Coelho H (2007) Simulation as formal and generative social science: the very idea. In: Gershenson C, Aerts D, Edmonds B (eds) Worldviews, science, and us: philosophy and complexity. World Scientific Publishing, Singapore, pp 266–284

  • Edmonds B, Meyer R (2017) Simulating social complexity—a handbook. Springer, Berlin

  • Edmonds B (2003) Towards an ideal social simulation language. In: Sichman JS et al (eds) Multi-agent-based simulation II, LNAI, vol 2581. Springer, New York, pp 105–124

  • Edmonds B, Moss S (2005) From KISS to KIDS—an ‘anti-simplistic’ modelling approach. In: Davidsson P et al (eds) Multi-agent-based simulation 2004, LNAI, vol 3415. Springer, New York, pp 130–144

  • Epstein J (1999) Agent-based computational models and generative social science. Complexity 4(5):41–59

  • Ethically Aligned Design (2019) A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. The IEEE global initiative for ethical considerations in artificial intelligence and autonomous systems, IEEE Standards Association

  • Fetzer J (1999) The role of models in computer science. Monist 82(1):20–36

  • Fetzer J (2001) Thinking and computing: computers as special kinds of signs. In: Bergman M, Queiroz J (eds) The commens encyclopedia: the digital encyclopedia of peirce studies. Springer, New York

  • Flanagan M, Howe D, Nissenbaum H (2008) Embodying values in technology: theory and practice. In: Van den Hoven J, Weckert J (eds) Information technology and moral philosophy (Cambridge studies in philosophy and public policy). Cambridge University Press, Cambridge, pp 322–353

  • Forsé M, Parodi M (2010) Low levels of ethnic intolerance do not create large ghettos: a discussion about an interpretation of Schelling’s model. L’année Sociologique 60(2):445–473. https://doi.org/10.3917/anso.102.0445

  • Fresco N, Primiero G (2013) Miscomputation. Philos Technol 26:253–272

  • Friedman B, Kahn PH Jr, Borning A (2008) Value sensitive design and information systems. The handbook of information and computer ethics. Wiley, Amsterdam

  • Gilbert N (2008) Agent-based models (quantitative applications in the social sciences). Sage Publications, New York

  • Hegselmann R (2017) Thomas C. Schelling and James M. Sakoda: the intellectual, technical, and social history of a model. J Artif Soc Soc Simul 20(3):15

  • Johnson DG (2006) Computer systems: moral entities but not moral agents. Ethics Inf Technol 8:195–204

  • Kirman A (2010) A comment on ‘low levels of ethnic intolerance do not create large ghettos’ by Michel Forsé and Maxime Parodi. L’année Sociologique 60(2):475–480. https://doi.org/10.3917/anso.102.0475

  • Kraemer F, van Overveld K, Peterson M (2011) Is there an ethics of algorithms? Ethics Inf Technol 13:251

  • Kroes P (2012) Technical artefacts: creations of mind and matter: a philosophy of engineering design. Springer, Dordrecht

  • Moor J (1985) What is computer ethics? Metaphilosophy 16:266–275

  • Piccinini G (2008) Computation without representation. Philos Stud 137:205

  • Pinch TJ, Bijker WE (1984) The social construction of facts and artefacts: or how the sociology of science and the sociology of technology might benefit each other. Soc Stud Sci 14(3):399–441

  • Rapaport WJ (1999) Implementation is semantic interpretation. Monist 82(1):109–130

  • Rolfe M (2010) A comment on ‘low levels of ethnic intolerance do not create large ghettos’ by Michel Forsé and Maxime Parodi. L’année Sociologique 60(2):481–492

  • Sakoda JM (1971) The checkerboard model of social interaction. J Math Sociol 1(1):119–132

  • Sargent R (2005) Verification and validation of simulation models. In: Kuhl ME et al (eds) Proceedings of the 37th winter simulation conference, pp 130–143

  • Schelling TC (1971) Dynamic models of segregation. J Math Sociol 1:143–186

  • Searle J (1995) The construction of social reality. The Free Press, New York

  • Smith BC (1995) Limits of correctness in computers. In: Johnson D, Nissenbaum H (eds) Computers, ethics & social responsibility. Prentice Hall, Hoboken, pp 456–469

  • Turner R (2014) Programming languages as technical artefacts. Philos Technol 27(3):377–397

  • Turner R (2018) Computational artefacts—towards a philosophy of computer science. Springer, New York

  • van den Hoven J (2007) ICT and value sensitive design. In: Goujon P, Lavelle S, Duquenoy P, Kimppa K, Laurent V (eds) The information society: innovation, legitimacy, ethics and democracy, vol 233. Springer, Boston

  • Vu TM, Probst C, Nielsen A, Bai HM, Petra S, Buckley C, Strong M, Brennan A, Purshouse RC (2020) A software architecture for mechanism-based social systems modelling in agent-based simulation models. J Artif Soc Soc Simul 23(3):1


Acknowledgements

I thank the anonymous reviewers for their careful reading of the manuscript and their useful comments and suggestions. This work was partially developed while the author was visiting the Department of Values, Technology and Innovation, Sections Ethics/Philosophy of Technology, at TU Delft, Netherlands. The author was partially supported by the Portuguese FCT-Fundação para a Ciência e a Tecnologia (SFRH/BSAB/114462/2016).

Author information

Corresponding author

Correspondence to Nuno David.

Cite this article

David, N. Implementations, interpretative malleability, value-laden-ness and the moral significance of agent-based social simulations. AI & Soc 38, 1565–1577 (2023). https://doi.org/10.1007/s00146-021-01304-y
