We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse, or even better, and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that, since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.
What Is Biodiversity? is a theoretical and conceptual exploration of the biological world and how diversity is valued. Maclaurin and Sterelny explore not only the origins of the concept of biodiversity, but also how that concept has been shaped by ecology and more recently by conservation biology. They explain the different types of biodiversity important in evolutionary theory, developmental biology, ecology, morphology and taxonomy, and conclude that biological heritage is rich in not just one biodiversity but many. Maclaurin and Sterelny also explore the case for the conservation of these biodiversities using option value theory, a tool borrowed from economics.
We divide analytic metaphysics into naturalistic and non-naturalistic metaphysics. The latter we define as any philosophical theory that makes some ontological (as opposed to conceptual) claim, where that ontological claim has no observable consequences. We discuss further features of non-naturalistic metaphysics, including its methodology of appealing to intuition, and we explain the way in which we take it to be discontinuous with science. We outline and criticize Ladyman and Ross's 2007 epistemic argument against non-naturalistic metaphysics. We then present our own argument against it. We set out various ways in which intellectual endeavours can be of value, and we argue that, in so far as it claims to be an ontological enterprise, non-naturalistic metaphysics cannot be justified according to the same standards as science or naturalistic metaphysics. The lack of observable consequences explains why non-naturalistic metaphysics has, in general, failed to make progress, beyond increasing the standards of clarity and precision in expressing its theories. We end with a series of objections and replies.
If, as the new tenseless theory of time maintains, there are no tensed facts, then why do our emotional lives seem to suggest that there are? This question originates with Prior’s ‘Thank Goodness That’s Over’ problem, and still presents a significant challenge to the new B-theory of time. We argue that this challenge has more dimensions to it than has been appreciated by those involved in the debate so far. We present an analysis of the challenge, showing the different questions that a B-theorist must answer in order to meet it. The debate has focused on the question of what the object of my relief is when an unpleasant experience is past. We outline the prevailing response to this question. The additional, and neglected, questions are, first: why does the same event elicit different emotional responses from us depending on whether it is in the past, present, or future? And second: why do we care more about proximate future pain than about distant future pain? We give B-theory answers to these questions, which appeal to evolutionary considerations.
A concise but informative overview of AI ethics and policy.

Artificial intelligence, or AI for short, has generated a staggering amount of hype in the past several years. Is it the game-changer it's been cracked up to be? If so, how is it changing the game? How is it likely to affect us as customers, tenants, aspiring homeowners, students, educators, patients, clients, prison inmates, members of ethnic and sexual minorities, and voters in liberal democracies? Authored by experts in fields ranging from computer science and law to philosophy and cognitive science, this book offers a concise overview of the moral, political, legal and economic implications of AI. It covers the basics of AI's latest permutation, machine learning, and considers issues such as transparency, bias, liability, privacy, and regulation.

Both business and government have integrated algorithmic decision support systems into their daily operations, and the book explores the implications for our lives as citizens. For example, do we take it on faith that a machine knows best in approving a patient's health insurance claim or a defendant's request for bail? What is the potential for manipulation by targeted political ads? How can the processes behind these technically sophisticated tools ever be transparent? The book discusses such issues as statistical definitions of fairness, legal and moral responsibility, the role of humans in machine learning decision systems, “nudging” algorithms and anonymized data, the effect of automation on the workplace, and AI as both regulatory tool and target.
This project has been supported by the Australian Government through the Australian Research Council (project number CS170100008); the Department of Industry, Innovation and Science; and the Department of the Prime Minister and Cabinet. ACOLA collaborates with the Australian Academy of Health and Medical Sciences and the New Zealand Royal Society Te Apārangi to deliver the interdisciplinary Horizon Scanning reports to government. The aims of the project which produced this report are:
1. Examine the transformative role that artificial intelligence may play in different sectors of the economy, including the opportunities, risks and challenges that its advancement presents.
2. Examine the ethical, legal and social considerations and frameworks required to enable and support broad development and uptake of artificial intelligence.
3. Assess the future education, skills and infrastructure requirements to manage workforce transition and support thriving and internationally competitive artificial intelligence industries.
The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.
This chapter explores the idea that phylogenetic diversity plays a unique role in underpinning conservation endeavour. The conservation of biodiversity is suffering from a rapid, unguided proliferation of metrics. Confusion is caused by the wide variety of contexts in which we make use of the idea of biodiversity. Characterisations of biodiversity range from all-variety-at-all-levels down to variety with respect to single variables relevant to very specific conservation contexts. Accepting biodiversity as the sum of a large number of individual measures results in an empirically intractable framework. However, large-scale decisions cannot be based on biodiversity variables inferred from local conservation imperatives, because the variables relevant to the many systems being compared would be incommensurate with one another. We therefore need some general conception of biodiversity that would make tractable such large-scale environmental decision-making. We categorise the large array of strategies for the measurement of biodiversity into four broad groups for consideration as general measures of biodiversity. We compare common moral justifications for the conservation of biodiversity and conclude that some form of instrumental value is the most plausible justification for biodiversity conservation. Although this is often interpreted as a reliance on option value, we opt for a broadly consequentialist characterisation of biodiversity conservation. We conclude that the best justified general measure of biodiversity will be some form of phylogenetic diversity.
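Since the chapter's conclusion turns on what a workable general measure would look like, a concrete example may help. Below is a minimal sketch of Faith's phylogenetic diversity (PD), one well-known measure of this kind: the sum of the branch lengths of the smallest subtree connecting a chosen set of taxa to the root. The toy tree, its branch lengths, and the helper name faith_pd are illustrative assumptions, not data or code from the chapter.

```python
# A minimal sketch of Faith's phylogenetic diversity (PD): the sum of
# branch lengths on the smallest subtree connecting a set of sampled
# taxa to the root. The tree below is an invented toy example.

# Tree encoded as child -> (parent, length of the branch above child).
tree = {
    "A": ("n1", 2.0), "B": ("n1", 2.0),
    "C": ("n2", 1.0), "D": ("n2", 3.0),
    "n1": ("root", 1.5), "n2": ("root", 0.5),
}

def faith_pd(taxa):
    """Sum the lengths of every branch on a path from a sampled taxon
    to the root, counting each branch exactly once."""
    counted = set()
    total = 0.0
    for node in taxa:
        while node in tree:  # the root has no entry, so we stop there
            parent, length = tree[node]
            if node not in counted:
                counted.add(node)
                total += length
            node = parent
    return total

print(faith_pd({"A", "B"}))  # 5.5: A and B share the n1 -> root branch
print(faith_pd({"A", "D"}))  # 7.0: spans both sides of the tree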
The notion of innateness is widely used, particularly in philosophy of mind, cognitive science and linguistics. Despite this popularity, it remains a controversial idea. This is partly because of the variety of ways in which it can be explicated and partly because it appears to embody the suggestion that we can determine the relative causal contributions of genes and environment in the development of biological individuals. As these causes are not independent, the claim is metaphysically suspect. This paper argues that there is a plausible reconstruction of the notion of innateness. This involves defining it sufficiently broadly to cover most of the current usages as well as making it an informational rather than a causal property. This has two consequences. Firstly, innateness becomes a matter of degree. Secondly, we have to abandon the idea, originally proposed by ethologists, that innate traits are necessarily the products of genetic information.
A common approach in the Philosophy of Time, particularly in enquiry into the metaphysical nature of time, has been to examine various aspects of the nature of human temporal experience, and ask what, if anything, can be discerned from this about the nature of time itself. Many human traits have explanations that reside in facts about our evolutionary history. We ask whether features of human temporal experience might admit of such evolutionary explanations. We then consider the implications of any proposed evolutionary explanations for the veridicality of these experiences, and for the truth-value of folk beliefs about time that are based on them.
Molecular Weismannism is the claim that: “In the development of an individual, DNA causes the production both of DNA (genetic material) and of protein (somatic material). The reverse process never occurs. Protein is never a cause of DNA”. This principle underpins both the idea that genes are the objects upon which natural selection operates and the idea that traits can be divided into those that are genetic and those that are not. Recent work in developmental biology and in philosophy of biology argues that an acceptance of Molecular Weismannism requires the tacit assumption that genetic causes are different in kind from other developmental causes. Proponents of this view argue that if this assumption proves to be unwarranted then we should abandon, not just gene selectionism and gene-centred functional solutions to the units of selection problem, but also the very notion that there is any such thing as a “genetic trait”. A group of possible causal distinctions (proximity, ultimacy and specificity) is explored and found wanting. It is argued that an extended version of information theory, while not strong enough to support Molecular Weismannism, will support both the claim that traits can be divided into those that are genetic and those that are not, and the claim that there is good reason to privilege genetic causes within evolutionary and developmental explanations. The outcome of this for the units of selection debate is explored.
Commentary on “The transmission sense of information” by Carl T. Bergstrom and Martin Rosvall. James Maclaurin (University of Otago, Dunedin, New Zealand), Biology and Philosophy 26(2): 191–194. DOI 10.1007/s10539-010-9233-3.
The idea that some biological characteristics are innate, while controversial, is widespread in many academic disciplines. Neither philosophy nor science has outgrown the need to talk about traits which, for a variety of reasons, appear to be inherent in biological populations. Philosophical claims of this nature are to be found in theories of moral sense, rational capacities, the way in which perception structures experience, and so on. Scientific claims about innate traits are to be found in the study of animal behaviour and, most famously, in the relatively recent rise of nativism in cognitive science. In this tradition, Noam Chomsky and his heirs argue that much of our capacity to decipher verbal information is innate. David Marr defends a similar position with respect to the interpretation of visual information.
Rationis Defensor is a volume of previously unpublished essays celebrating the life and work of Colin Cheyne. It celebrates his dedication to rational enquiry and his philosophical style, as well as the distinctive brand of naturalistic philosophy for which Otago has become known. Contributors to the volume include a wide variety of philosophers, all with a personal connection to Colin, and all of whom are, in their own way, defenders of rationality.
Edited book containing the following essays:
1. Getting over Gettier, Alan Musgrave
2. Justified Believing: Avoiding the Paradox, Gregory W. Dawes
3. Literature and Truthfulness, Gregory Currie
4. Where the Buck-passing Stops, Andrew Moore
5. Universal Darwinism: Its Scope and Limits, James Maclaurin
6. The Future of Utilitarianism, Tim Mulgan
7. Kant on Experiment, Alberto Vanzo
8. Did Newton ‘Feign’ the Corpuscular Hypothesis?, Kirsten Walsh
9. The Progress of Scotland: The Edinburgh Philosophical Societies and the Experimental Method, Juan Gomez
10. Propositions: Truth vs. Existence, Heather Dyke
11. Against Advanced Modalizing, Josh Parsons
12. Spread Worlds, Plenitude and Modal Realism: A Problem for David Lewis, Charles R. Pigden and Rebecca E. B. Entwisle
13. Defending Quine on Ontological Commitment
14. The Scandal of Platonism, Vladimír Svoboda
15. A Neglected Reply to Prior's Dilemma, J. C. Beall
16. Mathematical and Empirical Concepts, Pavel Materna
17. Post-Fregean Thoughts on Propositional Unity, Bjørn Jespersen
18. Best-path Theorem Proving: Compiling Derivations, Martin Frické
19. Is Imperative Inference Impossible?, Hannah Clark-Younger
Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy, and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace, AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses:
Chapter 1: Defining the Technology of Interest;
Chapter 2: The Changing Nature and Value of Work;
Chapter 3: AI and the Employment Relationship;
Chapter 4: Consumers, Professions and Society.
The report includes recommendations to the New Zealand Government.
This paper develops an account of evolutionary progress for use in the field of evolutionary economics. Previous work is surveyed and a new account set out, based on the idea of evolvability as it has been used recently in evolutionary developmental biology. The biological underpinnings of this idea are explained using examples of a series of phenomena that influence the evolvability of biological systems. It is further argued that selection pressures and developmental processes are sufficiently similar to make this biological concept useful in economics. The new account is defended against a number of common objections to the notion of progress in evolving systems, including the claim that all stipulated measures of evolutionary progress are essentially arbitrary. It is argued that progress, understood as an increase in evolvability over time, is philosophically well justified and provides useful predictive and explanatory resources to those seeking to understand and manipulate evolving economic systems.
In Molecular Models: Philosophical Papers on Molecular Biology, Sahotra Sarkar presents a historical and philosophical analysis of four important themes in philosophy of science that have been influenced by discoveries in molecular biology. These are: reduction, function, information and directed mutation. I argue that there is an important difference between the cases of function and information and the more complex case of scientific reduction. In the former cases it makes sense to taxonomise important variations in scientific and philosophical usage of the terms “function” and “information”. However, the variety of usage of “reduction” across scientific disciplines (and across philosophy of science) makes such taxonomy inappropriate. Sarkar presents reduction as a set of facts about the world that science has discovered, but the facts in question are remarkably disparate: variously semantic, epistemic and ontological. I argue that the more natural conclusion of Sarkar’s analysis is eliminativism about reduction as a scientific concept.
Many things evolve: species, languages, sports, tools, biological niches, and theories. But are these real instances of natural selection? Current assessments of the proper scope of Darwinian theory focus on the broad similarity of cultural or non-organic processes to familiar central instances of natural selection. That similarity is analysed in terms of abstract functional descriptions of evolving entities (e.g. replicators, interactors, developmental systems etc). These strategies have produced a proliferation of competing evolutionary analyses. I argue that such reasoning ought not to be employed in arbitrating debates about whether particular phenomena count as instances of natural selection. My argument is based on hierarchical functional descriptions of natural selection. I suggest that natural selection ought not to be thought of as a single process but rather as a series of processes which can be analysed in terms of a hierarchy of functional descriptions (in much the same way as many people think of cognition). This, in turn, casts doubt on the idea that it is possible in principle to settle debates about whether particular phenomena count as instances of natural selection.
Grammatical Evolution (GE) has a long history in evolutionary computation. Central to the behaviour of GE is the use of a linear representation and grammar to map individuals from search spaces into problem spaces. This genotype-to-phenotype mapping is often argued to be a distinguishing property of GE relative to other techniques, such as context-free grammar genetic programming (CFG-GP). Since its initial description, GE research has attempted to incorporate information from the grammar into crossover, mutation, and individual initialisation, blurring the distinction between genotype and phenotype and creating GE variants closer to CFG-GP. This is argued to provide GE with the “best of both worlds”, allowing degrees of grammatical bias to be introduced into operators to best suit the given problem. This paper examines the behaviour of three grammar-based search methods on several problems from previous GE research. It is shown that, unlike CFG-GP, the performance of “pure” GE on the examined problems closely resembles that of random search. The results suggest that further work is required to determine the cases in which the “best of both worlds” of GE is required over a straight CFG-GP approach.
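Since the paper's contrast between GE and CFG-GP rests on GE's genotype-to-phenotype mapping, a small sketch of the standard decoding scheme may help: a linear genome of integer codons drives a leftmost derivation from a context-free grammar, with each codon selecting a production by codon mod (number of rules for the current non-terminal). The toy grammar, the example genome, and the helper name ge_map are illustrative assumptions rather than code from the paper.

```python
# A minimal sketch of the standard GE genotype-to-phenotype mapping.
# The grammar and genome are invented toy examples.

grammar = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Expand the leftmost non-terminal until none remain, consuming
    integer codons left to right and wrapping at the genome's end."""
    derivation = [start]
    i = 0
    while any(sym in grammar for sym in derivation):
        if i >= len(genome) * (max_wraps + 1):
            raise ValueError("ran out of codons (invalid individual)")
        # Find the leftmost non-terminal and pick one of its rules.
        pos = next(j for j, s in enumerate(derivation) if s in grammar)
        rules = grammar[derivation[pos]]
        choice = genome[i % len(genome)] % len(rules)
        derivation[pos:pos + 1] = rules[choice]
        i += 1
    return "".join(derivation)

print(ge_map([0, 1, 2, 1, 1]))  # -> "x+x" with this grammar and genome
```

One design consequence worth noting: because changing an early codon can alter which non-terminal every later codon expands, small genotypic changes can produce large phenotypic ones, a locality problem often cited in the GE literature and one plausible reason "pure" GE can behave like random search, whereas operators that consult the grammar directly sit closer to CFG-GP.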
This article responds to a commentary by Christian Schubert on our 'Evolvability and Progress in Evolutionary Economics'. Our response elaborates on the key disagreement between Schubert and us, namely our differing views about the purpose of an account of progress in evolutionary economics.
Philosophers differ widely in the extent to which they condone the exploration of the realms of possibilia. Some are very enamoured of thought experiments in which human intuition is trained upon the products of human imagination. Others are much more sceptical of the fruits of such purely cognitive explorations. That said, it is clear that human beings cannot dispense with modal speculation altogether. Rationality rests upon the ability to make decisions, and that in turn rests upon the ability to learn about what is possible and what is probable. Thus, on pain of irrationality, we must have some means of exploring other possible worlds. Thankfully, intuition is not the only aid we have at our disposal. Science also is in the business of finding regularities which hold counterfactually. Scientific theory tells us about the likelihood of particular outcomes flowing from particular processes given particular background conditions. Thus, it also tells us about the contents of other possible worlds. One consequence of the possibility of such inferences has been a theoretical interest, not just in the contents, but also in the geography of the domain of all possible worlds. Metaphysicians, epistemologists and philosophers of language are very familiar with locutions such as “nearby possible worlds”. Similarly, evolutionary theory tells us that there is little chance of us discovering an organism that is mammal-like in most respects except in having six limbs. It’s not that we know such an organism to be impossible, but rather that we think it would be the product of an evolutionary history very different to the actual history of life on earth. Put another way, such organisms would be denizens of distant possible worlds. Clearly then, both biology and philosophy have ample motivation to be interested in the reasoning and evidence that supports such claims. Seemingly, in both disciplines there is a certain lure to this modal cartography, but ought we in fact to be convinced of its merits? Is it science or philosophy, or not a good example of either? What sort of problems can it solve? What sort of problems will it create? How might we test its accuracy? In his excellent book Theoretical Morphology: The Concept and Its Applications, George McGhee provides an admirable introduction to the complex theoretical landscape surrounding the exploration of possible biological form.