Neither our evolutionary past nor our pre-literate culture has prepared humanity for the use of technology to provide records of the past, records which in many contexts become normative for memory. The demand that memory be true, rather than useful or pleasurable, has changed our social and psychological understanding of ourselves and our fellows. The current vogue for lifelogging, and the rapid proliferation of digital memory-supporting technologies, may accelerate this change, and create dilemmas for policymakers, designers and social thinkers.
Motivated by the significant amount of successful collaborative problem solving activity on the Web, we ask: Can the accumulated information propagation behavior on the Web be conceived as a giant machine, and reasoned about accordingly? In this paper we elaborate a thesis about the computational capability embodied in information sharing activities that happen on the Web, which we term socio-technical computation, reflecting not only explicitly conditional activities but also the organic potential residing in information on the Web.
This ground-breaking and timely book explores how big data, artificial intelligence and algorithms are creating new types of agency, and the impact that this is having on our lives and the rule of law. Addressing the issues in a thoughtful, cross-disciplinary manner, the authors examine the ways in which data-driven agency is transforming democratic practices and the meaning of individual choice. Leading scholars in law, philosophy, computer science and politics analyse the latest innovations in data science and machine learning, assessing the actual and potential implications of these technologies. They investigate how this affects our understanding of such concepts as agency, epistemology, justice, transparency and democracy, and advocate a precautionary approach that takes the effects of data-driven agency seriously without taking it for granted. Scholars and students of law, ethics and philosophy, in particular legal, political and democratic theory, will find this book a compelling and invaluable read, as will computer scientists interested in the implications of their own work. It will also prove insightful for academics and activists working on privacy, fairness and anti-discrimination. Contributors include: J.E. Cohen, G. de Vries, S. Delacroix, P. Dumouchel, C. Ess, M. Garnett, E.H. Gerding, R. Gomer, C. Graber, M. Hildebrandt, C. Maple, K. O'Hara, P. Ohm, m.c. schraefel, D. Stevens, N. van Dijk, M. Veale.
This book argues that the novelist Joseph Conrad's work speaks directly to us in a way that none of his contemporaries can. Conrad's scepticism, pessimism, emphasis on the importance and fragility of community, and the difficulties of escaping our history are important tools for understanding the political world in which we live. He is prepared to face a future where progress is not inevitable, where actions have unintended consequences, and where we cannot know the contexts in which we act. _Heart of Darkness_ uncovers the rotten core of the Eurocentric myth of imperialism as a way of bringing enlightenment to 'native peoples' – lessons which are relevant once more as the Iraq debacle has undermined the claims of liberal democracy to universal significance. The result can hardly be called a political programme, but Conrad's work is clearly suggestive of a sceptical conservatism of the sort described by the author in his 2005 book _After Blair: Conservatism Beyond Thatcher_. The difficult part of a Conradian philosophy is the profundity of his pessimism – far greater than that of Oakeshott, with whom Conrad does share some similarities. Conrad's work poses the question of how far we as a society are prepared to face the consequences of our ignorance.
A conceptual analysis of trust in terms of trustworthiness is set out, where trustworthiness is the property of an agent that she does what she claims she will do, and trust is an attitude taken by an agent to another, that the former believes that the latter is trustworthy. This analysis is then used to explore issues in the deployment of trustworthy digital systems online. The ideas of a series of philosophers from the Enlightenment – Hobbes, Burke, Rousseau, Hume, Smith and Kant – are examined in the light of this exploration to suggest how we might proceed in the Digital Enlightenment to ensure that systems are both trustworthy and trusted.
In recent work, MacPherson argues that the standard method of modeling belief logically, as a necessity operator in a modal logic, is doomed to fail. The problem with normal modal logics as logics of belief is that they treat believers as "ideal" in unrealistic ways (i.e., as omnidoxastic); similar problems, however, re-emerge for candidate non-normal logics. The authors argue that logics used to model belief in artificial intelligence (AI) are also flawed in this way. But for AI systems, omnidoxasticity is impossible because of their finite nature, and this fact can be exploited to produce operational models of fallible belief. The relevance of this point to various philosophical views about belief is discussed.
The paper attempts to establish the importance of addressing what Chalmers calls the 'easy problems' of consciousness, at the expense of the 'hard problem'. One pragmatic argument and two philosophical arguments are presented to defend this approach to consciousness, and three major theories of consciousness are criticised in this light. Finally, it is shown that concentration on the easy problems does not lead to eliminativism with respect to consciousness.