In this paper data from a Tanzanian horticultural population are used to assess whether mother’s kin network size predicts several measures of children’s health and well-being, and whether any kin effects are modified by household socioeconomic status. This hypothesis is further tested with a questionnaire on maternal attitudes towards kin. Results show small associations between measures of maternal kin network size and child mortality and children’s growth performance. Together these results suggest that kin positively influence child health, but the effects are small and it is unlikely that the high prevalence of undernutrition observed in this setting is influenced by the availability of kin.
Drawing upon evolutionary theory and the work of Daniel Dennett and Nicholas Agar, I offer an argument for broadening discussion of the ethics of disenhancement beyond animal welfare concerns to a consideration of animal “biopreferences”. Short of rendering animals completely unconscious or decerebrate, it is reasonable to suggest that disenhanced animals will continue to have some preferences. To the extent that these preferences can be understood as what Agar refers to as “plausible naturalizations” for familiar moral concepts like beliefs and desires, they can make moral claims on us and provide support for intuitive opposition to disenhancement. (John Hadley, School of Humanities and Communication Arts, University of Western Sydney; NanoEthics, DOI 10.1007/s11569-012-0142-6.)
In this paper I extend liberal property rights theory to nonhuman animals. I sketch an outline of a nonhuman animal property rights regime and argue that both proponents of animal rights and ecological holism ought to accept nonhuman animal property rights. To conclude I address a series of objections.
In this paper I extend orthodox just-war terrorism theory to the phenomenon of extremist violence on behalf of nonhuman animals. I argue that most documented cases of so-called animal rights extremism do not qualify as terrorism.
Much of traditional AI exemplifies the explicit representation paradigm, and during the late 1980s a heated debate arose between the classical and connectionist camps as to whether beliefs and rules receive an explicit or implicit representation in human cognition. In a recent paper, Kirsh (1990) questions the coherence of the fundamental distinction underlying this debate. He argues that our basic intuitions concerning explicit and implicit representations are not only confused but inconsistent. Ultimately, Kirsh proposes a new formulation of the distinction, based upon the criterion of constant time processing. The present paper examines Kirsh's claims. It is argued that Kirsh fails to demonstrate that our usage of explicit and implicit is seriously confused or inconsistent. Furthermore, it is argued that Kirsh's new formulation of the explicit-implicit distinction is excessively stringent, in that it banishes virtually all sentences of natural language from the realm of explicit representation. By contrast, the present paper proposes definitions for explicit and implicit which preserve most of our strong intuitions concerning straightforward uses of these terms. It is also argued that the distinction delineated here sustains the meaningfulness of the abovementioned debate between classicists and connectionists.
This paper addresses the question of whether Medicaid is in fact a high-cost program after adjusting for the health of the people it covers. We compare and simulate annual per capita medical spending for lower-income people (families with incomes under 200% of poverty) covered for a full year by either Medicaid or private insurance. We first show that low-income privately insured enrollees and Medicaid enrollees have very different socioeconomic and health characteristics. We then present simulated comparisons based on multivariate statistical models that estimate the effects of private and Medicaid coverage on the likelihood of using services, and the level of expenditures, given any use, holding constant demographic, economic, and health status characteristics. The simulations demonstrate that if people with Medicaid coverage—with their health status, disability, and chronic conditions—were given private coverage, they would cost considerably more than they do today. Conversely, if the privately insured were given Medicaid coverage, spending would be lower. We find no evidence that spending differences between Medicaid and private coverage for low-income people are due to lower service use by Medicaid beneficiaries. We conclude that most of the difference in expenditures is due to differences in provider payment rates.
Most moral philosophers accept that we have obligations to provide at least some aid and assistance to distant strangers in dire need. Philosophers who extend rights and obligations to nonhuman animals, however, have been less than explicit about whether we have any positive duties to free-roaming or ‘wild’ animals. I argue our obligations to free-roaming nonhuman animals in dire need are essentially no different from those we have to severely cognitively impaired distant strangers. I address three objections to the view that we have positive duties to free-roaming nonhuman animals, and respond to the predation objection to animal rights.
In this paper I bring together self-defense theory and animal rights theory. The extension of self-defense theory to animals poses a serious problem for proponents of animal rights. If, in line with orthodox self-defense theory, a person is a legitimate target for third-party self-defensive violence if they are responsible for a morally unjustified harm without an acceptable excuse; and if, in line with animal rights theory, people who consume animal products are responsible for unjustified harm to animals, then many millions, if not billions, of otherwise law-abiding and decent people will be legitimate targets for third-party self-defensive violence on behalf of animals. I call this the multiple inappropriate targets problem for animal rights.
In this paper I introduce the ‘changing the subject’ problem. When proponents of animal protection use terms such as dignity and respect, they can fairly be accused of shifting debate from welfare to rights, because the terms purportedly refer to properties and values that are logically distinct from the capacity to suffer and the moral significance of causing animals pain. To avoid this problem and ensure that debate proceeds in the familiar terms of the established welfare paradigm, I present an expressivist analysis of animal rights vocabulary. When terms such as dignity and respect are understood in line with the theory of moral language use known as expressivism, proponents of animal protection who use these terms can escape the charge of changing the subject. Drawing upon Helm’s theory of love, I show how the usage of rights vocabulary can be a respectable way for people to register their concern for the welfare of animals, even at times when it is unlikely that the animals concerned are suffering. Tying rights vocabulary to welfare via expressivism aligns the aims of animal rights with welfare, without the theoretical problems associated with attempts to ‘reduce’ dignity or respect to natural behaviour or inherent value.
Fodor's and Pylyshyn's stand on systematicity in thought and language has been debated and criticized. Van Gelder and Niklasson, among others, have argued that Fodor and Pylyshyn offer no precise definition of systematicity. However, our concern here is with a learning-based formulation of that concept. In particular, Hadley has proposed that a network exhibits strong semantic systematicity when, as a result of training, it can assign appropriate meaning representations to novel sentences (both simple and embedded) which contain words in syntactic positions they did not occupy during training. The experience of researchers indicates that strong systematicity in any form is difficult to achieve in connectionist systems. Herein we describe a network which displays strong semantic systematicity in response to Hebbian, connectionist training. During training, two-thirds of all nouns are presented only in a single syntactic position (either as grammatical subject or object). Yet, during testing, the network correctly interprets thousands of sentences containing those nouns in novel positions. In addition, the network generalizes to novel levels of embedding. Successful training requires a corpus of about 1000 sentences, and network training is quite rapid. The architecture and learning algorithms are purely connectionist, but classical insights are discernible in one respect, viz., that complex semantic representations spatially contain their semantic constituents. However, in other important respects, the architecture is distinctly non-classical.
The past decade has witnessed the emergence of a novel stance on semantic representation, and its relationship to context sensitivity. Connectionist-minded philosophers, including Clark and van Gelder, have espoused the merits of viewing hidden-layer, context-sensitive representations as possessing semantic content, where this content is partially revealed via the representations' position in vector space. In recent work, Bodén and Niklasson have incorporated a variant of this view of semantics within their conception of semantic systematicity. Moreover, Bodén and Niklasson contend that they have produced experimental results which not only satisfy a kind of context-based, semantic systematicity, but which, to the degree that reality permits, effectively deal with challenges posed by Fodor and Pylyshyn (1988) and Hadley (1994a). The latter challenge involved well-defined criteria for strong semantic systematicity. This paper examines the relevant claims and experiments of Bodén and Niklasson. It is argued that their case fatally involves two fallacies of equivocation: one concerning 'semantic content' and the other concerning 'novel test sentences'. In addition, it is argued that their ultimate construal of context-sensitive semantics contains serious confusions. These confusions are also found in certain publications dealing with "latent semantic analysis". Thus, criticisms presented here have relevance beyond the work of Bodén and Niklasson.
Recent proposals to improve public communication about animal-based biomedical research have been narrowly focused on reforming biomedical journal submission guidelines. My suggestion for communication reform is broader in scope, reaching beyond the research community to healthcare communicators and ultimately the general public. The suggestion is for researchers to provide journalists and public relations practitioners with concise summaries of their ‘animal use data’. Animal use data are collected by researchers and intended for the public record, but are rarely, if ever, given significant media exposure. By providing healthcare communicators with specific details about their animal use, researchers can play a role in informing people about a matter of serious public interest and help to promote a more open and publicly accountable animal research culture.
At present, the prevailing connectionist methodology for representing rules is to implicitly embody rules in neurally-wired networks. That is, the methodology adopts the stance that rules must either be hard-wired or trained into neural structures, rather than represented via explicit symbolic structures. Even recent attempts to implement production systems within connectionist networks have assumed that condition-action rules (or rule schema) are to be embodied in the structure of individual networks. Such networks must be grown or trained over a significant span of time. However, arguments are presented herein that humans sometimes follow rules which are very rapidly assigned explicit internal representations, and that humans possess general mechanisms capable of interpreting and following such rules. In particular, arguments are presented that the speed with which humans are able to follow rules of novel structure demonstrates the existence of general-purpose rule-following mechanisms. It is further argued that the existence of general-purpose rule-following mechanisms strongly indicates that explicit rule following is not an isolated phenomenon, but may well be a common and important aspect of cognition. The relationship of the foregoing conclusions to Smolensky's view of explicit rule following is also explored. The arguments presented here are pragmatic in nature, and are contrasted with the kind of arguments developed by Fodor and Pylyshyn in their recent, influential paper.
In this paper I suggest practical measures that can address some familiar, and some not so familiar, commercial obstacles to increasing media coverage of dissident opinion. The kernel of my proposal is for media codes of practice and workplace norms to reflect an ethical distinction between different kinds of commercial speech.
This study reports the findings of a survey of television news directors drawn from a Radio-Television News Directors Association (RTNDA) sample. Rationale for the study centers around an apparent trend in television news to extend its ethical boundaries to include high proportions of sensationalism, privacy invasion, deception, unfair reporting, and the like. Five principles of journalism ethics (truth, justice, freedom, humaneness, and stewardship) are used as the framework for discussing results of 34 ethical questions. Results show most news directors clearly favor traditional ethical solutions to ethical questions related to truth, justice, freedom, and stewardship principles. There is more disagreement among news directors in responses related to humaneness.
In this paper we make an argument for limiting veterinary expenditure on companion animals. The argument combines two principles: the obligation to give and the self-consciousness requirement. In line with the former, we ought to give money to organisations helping to alleviate preventable suffering and death in developing countries; the latter states that it is only intrinsically wrong to painlessly kill an individual that is self-conscious. Combined, the two principles inform an argument along the following lines: rather than spending inordinate amounts of money on veterinary care when a companion animal is sick or injured, it is better to give the money to an aid organisation and painlessly kill the animal.
Marcus et al.’s experiment (1999) concerning infant ability to distinguish between differing syntactic structures has prompted connectionists to strive to show that certain types of neural networks can mimic the infants’ results. In this paper we take a closer look at two such attempts: Shultz and Bale [Shultz, T.R. and Bale, A.C. (2001), Infancy 2, pp. 501–536] and Altmann and Dienes [Altmann, G.T.M. and Dienes, Z. (1999), Science 248, p. 875a]. We were not only interested in how well these two models matched the infants’ results, but also in whether they were genuinely learning the grammars involved in this process. After performing an extensive set of experiments, we found that, at first blush, Shultz and Bale’s model (2001) replicated the infants’ known data, but the model largely failed to learn the grammars. We also found serious problems with Altmann and Dienes’ model (1999), which fell short of matching any of the infants’ results and of learning the syntactic structure of the input patterns.
It is well understood and appreciated that Gödel’s Incompleteness Theorems apply to sufficiently strong, formal deductive systems. In particular, the theorems apply to systems which are adequate for conventional number theory. Less well known is that there exist algorithms which can be applied to such a system to generate a Gödel sentence for that system. Although the generation of a sentence is not equivalent to proving its truth, the present paper argues that the existence of these algorithms, when conjoined with Gödel’s results and accepted theorems of recursion theory, does provide the basis for an apparent paradox. The difficulty arises when such an algorithm is embedded within a computer program of sufficient arithmetic power. The required computer program (an AI system) is described herein, and the paradox is derived. A solution to the paradox is proposed, which, it is argued, illuminates the truth status of axioms in formal models of programs and Turing machines.
Within AI and the cognitively related disciplines, there exist a multiplicity of uses of belief. On the face of it, these differing uses reflect differing views about the nature of an objective phenomenon called belief. In this paper I distinguish six distinct ways in which belief is used in AI. I shall argue that not all these uses reflect a difference of opinion about an objective feature of reality. Rather, in some cases, the differing uses reflect differing concerns with special AI applications. In other cases, however, genuine differences exist about the nature of what we pre-theoretically call belief. To an extent, the multiplicity of opinions about, and uses of, belief echoes the discrepant motivations of AI researchers. The relevance of this discussion for cognitive scientists and philosophers arises from the fact that (a) many regard theoretical research within AI as a branch of cognitive science, and (b) even if theoretical AI is not cognitive science, trends within AI influence theories developed within cognitive science. It should be beneficial, therefore, to unravel the distinct uses and motivations surrounding belief, in order to discover which usages merely reflect differing pragmatic concerns, and which usages genuinely reflect divergent views about reality.
Libertarians concede that non-autonomous sentient beings pose a problem for their theory. But, while they acknowledge that libertarianism denies non-autonomous sentient beings basic moral rights, libertarians have overlooked how their theory also denies non-autonomous sentient beings basic moral powers. In this article, I show how the libertarian entitlement theory of justice, specifically, the theory for the original acquisition of holdings, denies non-autonomous sentient beings the moral power to originally acquire or make property. Attempts to avoid this problem by appealing to interests or preference autonomy are likely to be unsuccessful.
In this paper I argue that the potentially environmentally destructive scope of a libertarian property rights regime can be narrowed by applying reasonable limits to those rights. I will claim that excluding the right to destroy from the libertarian property rights bundle is consistent with self-ownership and Robert Nozick’s interpretation of the Lockean proviso.
In “Do Animals have an Interest in Liberty?” Alasdair Cochrane brings some much-needed attention to the ethics of animal confinement (2009a). Of particular significance is the question of whether confinement in itself is bad for nonhuman animal (hereafter, animal) well-being. If confinement conditions cause animals to suffer or frustrate their preferences, it is safe to assume that liberty or freedom (following Cochrane, I use the terms interchangeably) would be instrumentally good for them. But what about seemingly benign conditions of confinement in which animals are not suffering or having their preferences frustrated? Is confinement in such cases bad for their well-being? In other words, do animals have an …
An earlier article by the author, "Quine and Strawson on Logical Theory" (Analysis, volume 34, pages 207–208), is expanded and defended against criticisms made by Charles Sayward in "The Province of Logic" (Analysis, volume 36, pages 47–48). It is shown that Quine's definition of logical truth presupposes an understanding of "possibility," even if the term 'sentence' is used set-theoretically, and that if Quine is allowed the concept of "possibility," then Strawson must be allowed modal concepts for his purposes. The traditional claim that an argument is valid if and only if the corresponding conditional is necessary is also defended.
In the late 1980s, there were many who heralded the emergence of connectionism as a new paradigm – one which would eventually displace the classically symbolic methods then dominant in AI and Cognitive Science. At present, there remain influential connectionists who continue to defend connectionism as a more realistic paradigm for modeling cognition, at all levels of abstraction, than the classical methods of AI. Not infrequently, one encounters arguments along these lines: given what we know about neurophysiology, it is just not plausible to suppose that our brains are digital computers. Thus, they could not support a classical architecture. I argue here for a middle ground between connectionism and classicism. I assume, for argument's sake, that some form(s) of connectionism can provide reasonably approximate models – at least for lower-level cognitive processes. Given this assumption, I argue on theoretical and empirical grounds that most human mental skills must reside in separate connectionist modules or sub-networks. Ultimately, it is argued that the basic tenets of connectionism, in conjunction with the fact that humans often employ novel combinations of skill modules in rule following and problem solving, lead to the plausible conclusion that, in certain domains, high level cognition requires some form of classical architecture. During the course of argument, it emerges that only an architecture with classical structure could support the novel patterns of information flow and interaction that would exist among the relevant set of modules. Such a classical architecture might very well reside in the abstract levels of a hybrid system whose lower-level modules are purely connectionist.
In his discussion of results which I (with Michael Hayward) recently reported in this journal, Kenneth Aizawa takes issue with two of our conclusions, which are: (a) that our connectionist model provides a basis for explaining systematicity within the realm of sentence comprehension, subject to a limited range of syntax; and (b) that the model does not employ structure-sensitive processing, and that this is clearly true in the early stages of the network's training. Ultimately, Aizawa rejects both (a) and (b) for reasons which I think are ill-founded. In what follows, I offer a defense of our position. In particular, I argue (1) that Aizawa adopts a standard of explanation that many accepted scientific explanations could not meet, and (2) that Aizawa misconstrues the relevant meaning of structure-sensitive process.
A process-oriented model of belief is presented which permits the representation of nested propositional attitudes within first-order logic. The model (NIM, for nested intensional model) is axiomatized, sense-based (via intensions), and sanctions inferences involving nested epistemic attitudes, with different agents and different times. Because NIM is grounded upon senses, it provides a framework in which agents may reason about the beliefs of another agent while remaining neutral with respect to the syntactic forms used to express the latter agent's beliefs. Moreover, NIM provides agents with a conceptual map, interrelating the concepts of knowledge, belief, truth, and a number of cognate concepts, such as infers, retracts, and questions. The broad scope of NIM arises in part from the fact that its axioms are represented in a novel extension of first-order logic, -FOL (presented herein). -FOL simultaneously permits the representation of truth ascriptions, implicit self-reference, and arbitrarily embedded sentences within a first-order setting. Through the combined use of principles derived from Frege, Montague, and Kripke, together with context-sensitive semantic conventions, -FOL captures the logic of truth inferences, while avoiding the inconsistencies exhibited by Tarski. Applications of -FOL and NIM to interagent reasoning are described, and the soundness and completeness of -FOL are established herein.
Recent social science research indicates that animal rights philosophy plays the functional role of a religion in the lives of the most committed animal rights advocates. In this paper, I apply the functional religion thesis to the recent debate over the place of direct action animal rights advocacy in democratic theory. I outline the usefulness of the functional religion thesis and explain its implications for theorists who call for deliberative theories to be more inclusive of coercive forms of activism.
J. Baird Callicott's claim to have unified environmentalism and animal liberation should be rejected by holists and liberationists. By making relations of intimacy necessary for moral considerability, Callicott excludes from the moral community nonhuman animals unable to engage in intimate relations due to the circumstances of their confinement. By failing to afford moral protection to animals in factory farms and research laboratories, Callicott's biosocial moral theory falls short of meeting a basic moral demand of liberationists. Moreover, were Callicott to include factory farm and research animals inside the moral community by affording them universal or non-communitarian rights, his theory would fall foul of environmentalists who seek to promote ecosystem stability and integrity via therapeutic hunting. If factory farm and research animals can have rights irrespective of their particular circumstances, then so can free-roaming animals from overabundant and exotic species.
In his most comprehensive book on the subject, Roger Penrose provides arguments to demonstrate that there are aspects of human understanding which could not, in principle, be attained by any purely computational system. His central argument relies crucially on oft-cited theorems proven by Gödel and Turing. However, that key argument has been the subject of numerous trenchant critiques, which is unfortunate if one believes Penrose's conclusions to be plausible. In the present article, alternative arguments are offered in support of Penrose-like conclusions. It is argued here that a purely computational agent, which lacked conscious awareness, would be incapable of possessing crucial concepts and of understanding certain kinds of geometrically-based proofs. Specifically, it is argued that the acquisition of human-like concepts of countable and non-denumerable infinities, and human-like comprehension of a particular geometrically motivated proof, does require conscious apprehension of the subject matter involved. This does not preclude the possibility that a computational agent might come to possess the requisite consciousness, but it is argued that if this consciousness does arise within the agent, it does so, at best, as an emergent, contingent side-effect of the underlying processes involved.
Third-party intervention has been the focus of recent debate in self-defense theory. When is it permissible for third parties to intervene on behalf of an innocent victim facing an unjustified attack or threat? In line with recent self-defense theory, if an attacker is morally responsible for their actions and does not have an acceptable excuse, then it is permissible for third parties to use proportionate violence against them.