Abstract In this paper, we explore how the application of technological tools has reshaped food production systems in ways that foster large-scale outbreaks of foodborne illness. Outbreaks of foodborne illness have received increasing attention in recent years, resulting in a growing awareness of the negative impacts associated with industrial food production. These trends indicate a need to examine the systemic causes of outbreaks and how they are being addressed. In this paper, we analyze outbreaks linked to ground beef and salad greens. These case studies are informed by personal interviews, site visits, and an extensive review of government documents and peer-reviewed literature. To explore these cases, we draw from actor-network theory and political economy to analyze the relationships between technological tools, the design of industrial production systems, and the emergence and spread of pathogenic bacteria. We also examine whether current responses to outbreaks represent reflexive change. Lastly, we use the myth of Prometheus to discuss ethical issues regarding the use of technology in food production. Our findings indicate that current tools and systems were designed with a narrow focus on economic efficiency, while overlooking relationships with pathogenic bacteria and negative social impacts. In addition, we find that current responses to outbreaks do not represent reflexive change, and that a continued reliance on technological fixes to systemic problems may result in greater problems in the future. We argue that much can be learned from the myth of Prometheus. In particular, justice and reverence need to play a more significant role in guiding production decisions.
Journal of Agricultural and Environmental Ethics, pp. 1-26. DOI 10.1007/s10806-011-9357-8. Online ISSN 1573-322X; Print ISSN 1187-7863. Authors: Diana Stuart, Kellogg Biological Station and Department of Sociology, Michigan State University, 3700 East Gull Lake Drive, Hickory Corners, MI 49060, USA; Michelle R. Woroosz, Department of Agricultural Economics and Rural Sociology, Auburn University, 306A Comer Hall, Auburn, AL 36849, USA.
Stuart, Jennie Review(s) of: Hands off not an option! The reminiscence museum mirror of a humanistic care philosophy, by Professor Dr Hans Marcel Becker, assisted by Inez van den Dobbelsteen-Becker and Topsy Ros. Eburon Academic Publishers, Delft, 2011, 272 pp.
We report on the successful fabrication of polycrystalline silicon films by aluminium-induced crystallisation (AIC) of radio-frequency (rf) plasma-enhanced chemical vapour deposited (PECVD) a-Si films. The effects of annealing at different temperatures (300 and 400°C), below the eutectic temperature of the Si–Al binary system, on the crystallisation process have been studied. This work emphasises the important role of the position of the Al layer with respect to the Si layer in the crystallisation process. The properties of the crystallised films were characterised using X-ray diffraction, Raman spectroscopy, ellipsometry, field-emission scanning electron microscopy (FESEM) and atomic force microscopy (AFM). With an increase in the annealing temperature, it was found that the degree of crystallisation of annealed a-Si/Al and Al/a-Si films increased. The results showed that the arrangement where the Al was on top of the a-Si had a more prominent effect on crystallisation enhancement than when the Al was below the a-Si. The interfacial layer between the Al and a-Si film is crucial because it influences the layer-exchange process during annealing. The oxide layer formed between the Al and the a-Si layers greatly retards the crystallisation process in the case of the Al/Si arrangement. Our investigations suggest that polycrystalline Si films formed by AIC can be used as a seed layer in solar cell fabrication.
The weak-beam technique of electron microscopy (Cockayne, Ray and Whelan 1969) has been used to study constrictions found on extended dislocation lines in a copper-silicon alloy. The nature of these constrictions has been determined using in situ heating of the alloy.
Stuart, Stephen Review(s) of: On being certain: Believing you are right even when you're not, by Robert A. Burton, St Martin's Griffin, New York, 2008, (xiv + 256 pp., index, pbk, ISBN 978-0-312-54152-1).
Stuart, Stephen Review(s) of: Wicked company: Freethinkers and friendship in pre-revolutionary Paris, by Philipp Blom, Weidenfeld and Nicolson, London, 2011, (xxii + 361 pp., index, ISBN 978-0-297-85818-8).
In his classic 1936 essay "On the Concept of Logical Consequence", Alfred Tarski used the notion of satisfaction to give a semantic characterization of the logical properties. Tarski is generally credited with introducing the model-theoretic characterization of the logical properties familiar to us today. However, in his book, The Concept of Logical Consequence, Etchemendy argues that Tarski's account is inadequate for quite a number of reasons, and is actually incompatible with the standard model-theoretic account. Many of his criticisms are meant to apply to the model-theoretic account as well. In this paper, I discuss the following four critical charges that Etchemendy makes against Tarski and his account of the logical properties: (1) (a) Tarski's account of logical consequence diverges from the standard model-theoretic account at points where the latter account gets it right. (b) Tarski's account cannot be brought into line with the model-theoretic account, because the two are fundamentally incompatible. (2) There are simple counterexamples (enumerated by Etchemendy) which show that Tarski's account is wrong. (3) Tarski committed a modal fallacy when arguing that his account captures our pre-theoretical concept of logical consequence, and so obscured an essential weakness of the account. (4) Tarski's account depends on there being a distinction between the "logical terms" and the "non-logical terms" of a language, but (according to Etchemendy) there are very simple (even first-order) languages for which no such distinction can be made. Etchemendy's critique raises historical and philosophical questions about important foundational work. However, Etchemendy is mistaken about each of these central criticisms. In the course of justifying that claim, I give a sustained explication and defense of Tarski's account.
Moreover, since I will argue that Tarski's account and the model-theoretic account really do come to the same thing, my subsequent defense of Tarski's account against Etchemendy's other attacks doubles as a defense against criticisms that would apply equally to the familiar model-theoretic account of the logical properties.
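The model-theoretic notion of consequence at issue here can be illustrated concretely: Γ ⊨ φ just in case every model of the premises Γ is also a model of φ. The following sketch is my own illustration of that definition in a toy propositional setting (the function names and encoding are assumptions, not anything from the paper); it brute-forces all truth-value assignments:

```python
from itertools import product

def is_consequence(premises, conclusion, atoms):
    """Model-theoretic consequence: every valuation (model) satisfying
    all the premises must also satisfy the conclusion."""
    valuations = [dict(zip(atoms, bits))
                  for bits in product([False, True], repeat=len(atoms))]
    for v in valuations:
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a countermodel
    return True

# Formulas encoded as functions from valuations to truth values.
p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: (not v["p"]) or v["q"]

print(is_consequence([p, p_implies_q], q, ["p", "q"]))  # True: modus ponens
print(is_consequence([p], q, ["p", "q"]))               # False: p alone does not yield q
```

On this picture a counterexample to a putative consequence is simply a model of the premises that falsifies the conclusion, which is the point at which Tarski's satisfaction-based account and the modern model-theoretic account are argued to coincide.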
In this paper we consider the concept of a self-aware agent. In cognitive science agents are seen as embodied and interactively situated in worlds. We analyse the meanings attached to these terms in cognitive science and robotics, proposing a set of conditions for situatedness and embodiment, and examine the claim that internal representational schemas are largely unnecessary for intelligent behaviour in animats. We maintain that current situated and embodied animats cannot be ascribed even minimal self-awareness, and offer a six-point definition of embeddedness, constituting minimal conditions for the evolution of a sense of self. This leads to further analysis of the nature of embodiment and situatedness, and a consideration of whether virtual animats in virtual worlds could count as situated and embodied. We propose that self-aware agents must possess complex structures of self-directed goals, multi-modal sensory systems, and a rich repertoire of interactions with their worlds. Finally, we argue that embedded agents will possess or evolve local co-ordinate systems, or points of view, relative to their current positions in space and time, and have a capacity to develop an egocentric space. None of these capabilities is possible without powerful internal representational capacities.
It is argued that, based on Kant's descriptive metaphysics, one can prescribe the necessary metaphysical underpinnings for the possibility of conscious experience in an artificial system. This project is developed by giving an account of the a priori concepts of the understanding in such a system. A specification and implementation of the nomological conditions for a conscious system allows one to know a priori that any system possessing this structure will be conscious, thus enabling us to avoid possible false indicators of consciousness like those offered in a behaviouristic analysis. This is an alternative approach to the bottom-up or top-down approaches adopted by, for example, CYC (Lenat and Feigenbaum 1992) and COG (Brooks 1994; Brooks and Stein 1993), neither of which, alone or in some hybrid form, has proved productive.
Machine consciousness exists already in organic systems and it is only a matter of time -- and some agreement -- before it will be realised in reverse-engineered organic systems and forward-engineered inorganic systems. The agreement must be over the preconditions that must first be met if the enterprise is to be successful, and it is these preconditions, for instance, being a socially-embedded, structurally-coupled and dynamic, goal-directed entity that organises its perceptual input and enacts its world through the application of both a cognitive and kinaesthetic imagination, that I shall concentrate on presenting in this paper. It will become clear that these preconditions will present engineers with a tall order, but not, I will argue, an impossible one. After all, we might agree with Freeman and Núñez's claim that the machine metaphor has restricted the expectations of the cognitive sciences (Freeman & Núñez, 1999); but it is a double-edged sword, since our limited expectations about machines also narrow the potential of our cognitive science.
The crux of this book is expressed in one short sentence from the Preface: 'Unity is a fundamental part of our experience, something that is crucial to its phenomenology' [p.xii], and the crux of this sentence is that the unity of consciousness is not a matter of phenomenal relations existing between distinct experiences – the received view [p.17], but the existence of relations between the contents of experiences – the one experience view [p.25ff]. In its simplest form, Tye's claim is that all our conscious states, whether visual, auditory, olfactory, tactual or gustatory, whether imagistic or emotional, are experienced concurrently; they 'are phenomenologically unified ... [and] ... Phenomenological unity is a relation between qualities represented in experience, not between qualities of experiences' [p.36].
Alfred Tarski (1944) wrote that "the condition of the 'essential richness' of the metalanguage proves to be, not only necessary, but also sufficient for the construction of a satisfactory definition of truth." But it has remained unclear what Tarski meant by an 'essentially richer' metalanguage. Moreover, DeVidi and Solomon (1999) have argued in this Journal that there is nothing that Tarski could have meant by that phrase which would make his pronouncement true. We develop an answer to the historical question of what Tarski meant by 'essentially richer' and pinpoint the general result that stands behind his essential richness claim. In defense of Tarski, we then show that each of the several arguments of DeVidi and Solomon is either moot or mistaken. One of the fruits of our investigation is the reclamation of what Tarski took to be his central result on truth. This is a reclamation since: (i) if one does not understand 'essential richness', one does not know what that result is, and (ii) we must unearth a heretofore unrecognized change that occurs in Tarski's view - an alteration of his main thesis in light of a failing he discovered in it.
The aim of this paper is to establish the logically necessary preconditions for the existence of self-awareness in an artificial or a natural agent. We examine the terms agent, situated, embodied, embedded, and representation, as employed ubiquitously in cognitive science, attempting to clarify their meaning and the limits of their use. We discuss the minimal conditions for an agent's environment constituting a 'world' and reject most, though not all, types of virtual world. We argue that to qualify as genuinely situated an agent should function in real time within the dynamic world we inhabit, or some close simulacrum of it. We show that embodied agents will possess or evolve local co-ordinate systems, or points of view, locating, identifying and interacting with objects relative to their current position in space-time, and we discuss various types of embodiment, arguing that most current situated and embodied systems are too limited to be candidates for even the most minimal claim to self-identity. We argue that a truly autonomous agent has to be active in its participation with the world, able to synthesise and order its internal representations from its own point of view, and to do this effectively the agent will have to be embedded. To this end we propose a six-point definition of embeddedness. Ultimately we argue for a philosophical-cum-cognitive science model of the self that satisfies essential elements of both sets of definitions of the term.
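The 'local co-ordinate systems, or points of view' invoked here have a simple computational analogue: re-expressing world-frame object positions in an agent-centred frame. The sketch below is my own illustrative toy (the 2-D setting, convention that the egocentric x-axis points along the agent's heading, and function name are all assumptions, not anything from the paper):

```python
import math

def to_egocentric(agent_pos, agent_heading, obj_pos):
    """Express a world-frame object position in the agent's egocentric
    frame: translate to the agent's origin, then rotate by -heading,
    so the egocentric +x axis points where the agent is facing."""
    dx = obj_pos[0] - agent_pos[0]
    dy = obj_pos[1] - agent_pos[1]
    c, s = math.cos(-agent_heading), math.sin(-agent_heading)
    return (dx * c - dy * s, dx * s + dy * c)

# Agent at (1, 1) facing +y (heading pi/2); an object at (1, 2) lies
# one unit directly ahead, i.e. at (1, 0) in the egocentric frame.
x, y = to_egocentric((1.0, 1.0), math.pi / 2, (1.0, 2.0))
print(round(x, 6), round(y, 6))
```

The point of the toy is only that such a point of view is itself a representation the agent must maintain and update, which is what the argument above turns on.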
The problem with model-theoretic modal semantics is that it provides only the formal beginnings of an account of the semantics of modal languages. In the case of non-modal language, we bridge the gap between semantics and mere model theory by claiming that a sentence is true just in case it is true in an intended model. Truth in a model is given by the model theory, and an intended model is a model which has as domain the actual objects of discourse, and which relates these objects in an appropriate manner. However, the same strategy applied to the modal case seems to require an intended modal model whose domain includes mere possibilia. Building on recent work by Christopher Menzel (Nous 1990), I give an account of model-theoretic semantics for modal languages which does not require mere possibilia or intensional entities of any kind. Menzel has offered a representational account of model-theoretic modal semantics that accords with actualist scruples, since it does not require possibilia. However, Menzel's view is in the company of other actualists who seek to eliminate possible worlds, but whose accounts tolerate other sorts of abstract, intensional entities, such as possible states of affairs. Menzel's account crucially depends on the existence of properties and relations in intension. I offer a purely extensional, representational account and prove that it does all the work that Menzel's account does. The result of this endeavor is an account of model-theoretic semantics for modal languages requiring nothing but pure sets and the actual objects of discourse. Since these require nothing ontologically beyond what is prima facie presupposed by the model theory itself, the result is truly an ontology-free model-theoretic semantics for modal languages. That is to say, getting genuine modal semantics out of the model theory is ontologically cost-free.
Since my extensional account is demonstrably no less adequate, and yet is at the same time more ontologically frugal, it is certainly to be preferred.
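The extensional spirit of the proposal can be made concrete with a toy Kripke-style evaluation built from nothing but pure sets and indices: worlds are bare set-theoretic points, and a valuation maps each atom to a set of worlds. This sketch is my own illustration, not Menzel's construction or the account defended in the paper:

```python
def box(model, world, prop):
    """Evaluate 'necessarily prop' at `world`: prop must hold at
    every world accessible from `world`."""
    W, R, V = model
    return all(w2 in V[prop] for (w1, w2) in R if w1 == world)

def diamond(model, world, prop):
    """Evaluate 'possibly prop' at `world`: prop must hold at
    some world accessible from `world`."""
    W, R, V = model
    return any(w2 in V[prop] for (w1, w2) in R if w1 == world)

# Worlds as bare indices; R an accessibility relation; V a valuation
# mapping atoms to the set of worlds at which they hold.
W = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 1)}
V = {"p": {1, 2}, "q": {1}}
model = (W, R, V)

print(box(model, 0, "p"))      # True: p holds at both worlds accessible from 0
print(box(model, 0, "q"))      # False: q fails at world 2
print(diamond(model, 0, "q"))  # True: q holds at world 1
```

Nothing in this machinery is more than sets and actual objects of discourse; the philosophical work lies in arguing, as the paper does, that such a representational apparatus suffices for genuine modal semantics.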
According to Timothy Williamson's epistemic view, vague predicates have precise extensions, we just don't know where their boundaries lie. It is a central challenge to his view to explain why we would be so ignorant, if precise borderlines were really there. He offers a novel argument to show that our insuperable ignorance "is just what independently justified epistemic principles would lead one to expect". This paper carefully formulates and critically examines Williamson's argument. It is shown that the argument does not explain our ignorance, and is not really apt for doing so. Williamson's unjustified commitment to a controversial and crucial assumption is noted. It is also argued in three different ways that his argument is, in any case, self-defeating – the same principles that drive the argument can be applied to undermine one of its premises. Along the way, Williamson's unstated commitment to a number of other controversial doctrines comes to light.
The management literature is replete with studies on business ethics. Unfortunately, most of these studies have dealt exclusively with ethics in large businesses. Although a handful of studies can be found on small business ethics, none has paid attention to the issue of ethics in small minority businesses. Similarly, several studies on ethics have utilized the Wood et al. (1988) 16-vignette ethics scale, although reliability and validity issues associated with the scale have never been fully addressed. In this study, a purification (via content analysis) of the above-mentioned scale was performed. Three reliable factors were extracted from the purified scale. They were used to investigate ethics in small minority businesses. The study found an association between business ethics and demographic and company-related variables. In the case of respondents' age, the findings ran counter to the usual relationship of age being positively related to ethical attitudes. The implications of these findings are also discussed.
Abduction is or subsumes a process of inference. It entertains possible hypotheses and it chooses hypotheses for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. In contrast to previous work in the area, our aim is to promote the integration of reasoning and learning in a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability to the systems, facilitating the interaction with the outside world.
Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
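The hypothetico-deductive treatment of abduction described above can be sketched in purely symbolic terms: given a set of Horn rules and an observation, search for subset-minimal sets of abducibles whose deductive closure yields the observation. The code below is my own minimal illustration of that idea (the function names and example domain are assumptions); it is not the neural-symbolic machinery the paper proposes:

```python
from itertools import chain, combinations

def closure(facts, rules):
    """Forward-chain Horn rules (body, head) to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def abduce(observation, rules, abducibles):
    """Return the subset-minimal sets of abducibles whose closure
    under the rules derives the observation."""
    explanations = []
    subsets = chain.from_iterable(
        combinations(abducibles, r) for r in range(len(abducibles) + 1))
    for hyp in subsets:
        if observation in closure(hyp, rules):
            hyp = set(hyp)
            # keep only subset-minimal explanations
            if not any(e <= hyp for e in explanations):
                explanations.append(hyp)
    return explanations

# wet_grass can be explained by rain, or independently by a sprinkler.
rules = [(("rain",), "wet_grass"), (("sprinkler",), "wet_grass")]
print(abduce("wet_grass", rules, ["rain", "sprinkler"]))
# [{'rain'}, {'sprinkler'}]
```

The exhaustive subset search above is exponential in the number of abducibles; the appeal of the neural approaches discussed in the paper is precisely that alternative explanations can be computed in parallel and the machinery can be trained from examples.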
This paper critiques a recent article in this journal in terms of its use of persuasive techniques. The central issue of the original article by Miles, Munilla and Covin, and of this paper, is whether there should be a change in intellectual property rights to address the needs of impoverished people who are HIV positive or have full-blown AIDS, and of the countries that do not have the means to buy AIDS medication in the absence of subsidies. This paper argues that patents are state-sanctioned monopolies that worked effectively for nearly a century. However, new circumstances and a globally interdependent world represent a new environment calling for an adjustment in the conventional public policy premises underlying patents. Most of the meaning and complexity of this issue is lost to the persuasive techniques of the original article.
In "Logical consequence: A defense of Tarski" (Journal of Philosophical Logic, vol. 25, 1996, pp. 617-677), Greg Ray defends Tarski's account of logical consequence against the criticisms of John Etchemendy. While Ray's defense of Tarski is largely successful, his attempt to give a general proof that Tarskian consequence preserves truth fails. Analysis of this failure shows that de facto truth preservation is a very weak criterion of adequacy for a theory of logical consequence and should be replaced by a stronger absence-of-counterexamples criterion. It is argued that the latter criterion reflects the modal character of our intuitive concept of logical consequence, and it is shown that Tarskian consequence can be proved to satisfy this criterion for certain choices of logical constants. Finally, an apparent inconsistency in Ray's interpretation of Tarski's position on the modal status of the consequence relation is noted.
Place, practice and status have played significant and interacting roles in the complex history of primatology during the early to mid-twentieth century. This paper demonstrates that, within the emerging discipline of primatology, the field was understood as an essential supplement to laboratory work. Founders argued that only in the field could primates be studied in interaction with their natural social group and environment. Such field studies of primate behavior required the development of existing and new field techniques. The practices and sites developed by American primatologist Clarence Ray Carpenter were used to demonstrate that scientific standards could be successfully applied to the study of primates in the field. In an environment in which many field biologists fought for higher scientific status, Carpenter gradually adopted increasingly interventionist techniques. These techniques raised epistemological problems for studies whose value rested on the naturalness of the behaviors observed. Thus, issues of status shaped field practices and subsequently altered Carpenter's criteria for what constituted natural primate behavior.
The Golden rule expression for X-ray absorption spectra (XAS) is typically calculated within a one-particle (quasiparticle) approximation and generally leads to good agreement between theory and experiment. The fact that a quasiparticle approximation works fairly well is surprising, since it neglects satellite excitations and intrinsic losses due to a suddenly created core hole. The resolution of this paradox requires physics beyond the independent-particle approximation. This is discussed here using an effective Green's function formulation based on a quasi-boson model that takes interference between inelastic losses into account. This approach shows that inelastic excitations such as multi-electron excitations tend to be suppressed, and that the XAS is given by a broadened quasiparticle approximation, together with weak satellite structure and edge-singularity effects.
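For orientation, the one-particle Golden rule expression referred to here has, in its standard dipole form, the textbook shape (this is the generic formula, not an equation reproduced from the paper):

```latex
\mu(\omega) \;\propto\; \sum_{f} \bigl| \langle f \,|\, \hat{\epsilon} \cdot \mathbf{r} \,|\, i \rangle \bigr|^{2} \, \delta\!\left( E_{f} - E_{i} - \hbar\omega \right),
```

where $|i\rangle$ and $|f\rangle$ are initial and final one-particle (quasiparticle) states, $\hat{\epsilon}$ is the X-ray polarization vector, and the delta function enforces energy conservation. The many-body effects discussed in the abstract go beyond this form, in effect broadening the delta function and redistributing spectral weight into satellite structure.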