The relationship of word-meaning to speaker's-meaning has not been examined thoroughly enough. Some philosophical problems are solved and others made plainer if the full consequences of a proper relationship between the two are worked out.
When first published twenty years ago, The Logic of Medicine presented a new way of thinking about clinical medicine as a scholarly discipline as well as a profession. Since then, advances in research and technology have revolutionized both the practice and theory of medicine. In this new, extensively rewritten edition, Dr. Murphy includes changes to show how these different areas of scholarship may affect details of "the logic of medicine" without compromising its fundamental coherence. New to this edition are discussions of the challenge of the flood of new empirical data, new ideas in genetics, molecular biology, homeostasis, pathogenesis, cancer, aging, and Alzheimer's disease. Murphy also comments on such new theoretical topics as dynamic systems, chaos, and fractals and their impact on the burgeoning fields of philosophy and practice of medicine. Written with medical students in mind, the book includes a glossary, many new examples, and problems for solution with comments on each. An entirely new chapter deals with modeling. Clinicians and researchers will also find the principles thought-provoking and illuminating.
The central topic of this book is the ethics of treating individuals as though they are members of groups. The book raises many interesting questions, including: Why do we feel so much more strongly about discrimination on certain grounds, e.g. race and sex, than discrimination on other grounds? Are we right to think that discrimination based on these characteristics is especially invidious? What should we think about ‘rational discrimination’, that is, ‘discrimination’ based on sound statistics? To take just one of dozens of examples from the book: suppose a landlord turns away a prospective tenant because this prospective tenant is of a particular ethnicity, arguing that statistics show that one in four of this group have been shown in the past to default on their rent. That seems clearly unfair to people of this ethnicity. But we are routinely being judged in this way: not just on the basis of our ethnicity, but through assumptions made about us and decisions taken about us based on our gender, religion, job, post-code, hobbies, blood-group, nationality, etc. Now suppose that another landlord turns away a convicted criminal, arguing that one in four convicted criminals have been shown to be unreliable rent payers. Is our intuition the same as before? Should it be? This book is suitable for all students of philosophy, especially those with an interest in applied ethics.
The inspectability of after-images has been denied. A typical claim is Ilham Dilman's: ‘I cannot say my apprehension of the after-image I see has changed but not the after-image itself’, for, he says, appearance and reality are one as regards the after-image. His reason is that this is a logical consequence of the fact that other people have no possible basis for correcting what I say about the after-image I see.
The book presents what is now the mainstream view as to the best way forward in the dream of engineering reliable software systems out of autonomous agents: the use of formal logics to specify, implement and verify distributed systems of interacting units, guided by the analogy of beliefs, desires and intentions. The implicit message behind the book is this: Distributed Artificial Intelligence (DAI) can be a respectable engineering science. It says: we use sound formal systems; can cite established philosophical foundations; and will be able to build reliable and flexible software systems.
This is a survey of the development of the philosophy of perception over the past twelve years. There are four sections. Part I deals largely with arguments for the propositionalizing of perception and for those types of externally founded realism that eschew inner representation. Part II is devoted to three books that put the case for sense-data (Pennycuick, Jackson, Ginet) and some of the arguments against (Pitcher). Part III outlines James J. Gibson's psychological theory. Part IV takes up the arguments for a theory of 'dual coding', combining a non-epistemic inner presentation as a first stage with epistemic selection as an independent module. The mental-image argument (Kosslyn, Pylyshyn) and Wittgenstein's recently published remarks on psychology are brought in as relevant to this issue.
Introduction: Vulnerability is a poorly understood concept in research ethics, often aligned to autonomy and consent. A recent addition to the literature represents a taxonomy of vulnerability developed by Kipnis, but this refers to the conduct of clinical trials rather than qualitative research, which may raise different issues. Aim: To examine issues of vulnerability in cancer and palliative care research obtained through qualitative interviews. Method: Secondary analysis of qualitative data from 26 black Caribbean and 19 white British patients with advanced cancer. Results: Five domains of vulnerability derived from Kipnis’s taxonomy were identified: (i) communicative vulnerability, represented by participants impaired in their ability to communicate because of distressing symptoms; (ii) institutional vulnerability, which referred to participants who existed under the authority of others—for example, in hospital; (iii) deferential vulnerability, which included participants who were subject to the informal authority or the independent interests of others; (iv) medical vulnerability, which referred to participants with distressing medical conditions; and (v) social vulnerability, which included participants considered to belong to an undervalued social group. Participants from both ethnic groups populated all these domains, but those who were black Caribbean were more present among the socially vulnerable. Conclusions: Current classifications of vulnerability require reinterpretation when applied to qualitative research at the end of life. We recommend that researchers and research ethics committees reconceptualise vulnerability using the domains identified in this study and consider the research context and interviewers’ skills.
This discussion takes up an attack by Jerrold Aronson (seconded by Rom Harre) on the use made by Norwood R. Hanson of the Gestalt-Switch Analogy in the philosophy of science. Aronson's understanding of what is implied in a gestalt switch is shown to be flawed. In his endeavor to detach conceptual understanding from perceptual identification he cites several examples, without realizing the degree to which such gestalt switches can affect conceptualizing or how conceptualizing can affect gestalts. In particular, he has not confronted the possibility of such gestalt selection being involved in the basic identification of what we term "entities".
At the beginning of the twentieth century, the French philosopher of science Edmond Goblot wrote three prescient papers on function and teleology. He advanced the remarkable thesis that functions are, as a matter of conceptual analysis, selected effects. He also argued that “selection” must be understood broadly to include both evolutionary natural selection and intelligent design. Here, I do three things. First, I give an overview of Goblot’s thought. Second, I identify his core thesis about function. Third, I argue that, despite its ingenuity, Goblot’s expansive construal of “function” cannot be right. Still, Goblot deserves (long-overdue) credit for his work.
A published simulation model (Riolo et al. 2001) was replicated in two independent implementations so that the conceptual design as well as the results align. This double replication allowed the original to be analysed and critiqued with confidence. In this case, the replication revealed some weaknesses in the original model, which otherwise might not have come to light. This shows that unreplicated simulation models and their results cannot be trusted: as with other kinds of experiment, simulations need to be independently replicated.
In this paper I will argue that, in general, where the evidence supports two theories equally, the simpler theory is not more likely to be true and is not likely to be nearer the truth. In other words, simplicity does not tell us anything about model bias. Our preference for simpler theories (apart from their obvious pragmatic advantages) can be explained by two facts: humans are known to elaborate unsuccessful theories rather than attempt a thorough revision, and a fixed set of data can only justify adjusting a certain number of parameters to a limited degree of precision. No extra tendency towards simplicity in the natural world is necessary to explain our preference for simpler theories. Thus Occam's razor eliminates itself (when interpreted in this form).
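The point that a fixed set of data can only justify adjusting a certain number of parameters can be made concrete with a toy example (the three data points and the cubic family below are purely illustrative, not taken from the paper): three points fix all three coefficients of a quadratic, but leave a cubic's fourth parameter entirely unconstrained by the evidence.

```python
# Three hypothetical data points.
points = [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)]

def cubic(a, x):
    # x**2 + 1 is the unique quadratic through all three points;
    # a * x * (x - 1) * (x - 2) vanishes at every data point, so the
    # cubic coefficient 'a' is left completely undetermined by the data.
    return (x ** 2 + 1) + a * x * (x - 1) * (x - 2)

# Every member of this one-parameter family fits the data exactly.
for a in (0.0, 3.0, -7.0):
    assert all(abs(cubic(a, x) - y) < 1e-12 for x, y in points)

print("all cubics in the family fit the three points equally well")
```

The evidence alone cannot choose among these cubics; preferring the simple quadratic (a = 0) is then a pragmatic choice of the kind the abstract discusses, not one the data forces.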
The SDML programming language, which is optimized for modelling multi-agent interaction within articulated social structures such as organizations, is described with several examples of its functionality. SDML is a strictly declarative modelling language with object-oriented features, and it corresponds to a fragment of strongly grounded autoepistemic logic. The virtues of SDML include the ease of building complex models, the facility for representing agents flexibly as models of cognition, and modularity and code reusability.
This essay focuses on the persistence of conciliarist constitutionalism down into the seventeenth century, and on the particular way in which the Gallican author, Edmond Richer (1559-1631), framed it in his sweeping and influential critiques of the papalist ecclesiology. In the tradition established by his fifteenth- and sixteenth-century predecessors in the Parisian theology faculty, Richer's formulation of conciliar theory was essentially political in nature. As a result, it lent itself readily to use in the cause of constitutionalist aspiration by such eighteenth-century critics of French monarchical policy as Nicolas Le Gros and Gabriel-Nicolas Maultrot.
Thus far in the development of the discipline of medical ethics, the overriding concern has been with solutions to specific problems. But discussion is hampered by lack of understanding of the scope and methodology of medical ethics, and its scientific and philosophical basis. In Underpinnings of Medical Ethics, Edmond A. Murphy, James J. Butzow, and Edward L. Suarez-Murias offer much-needed clarification of the purview, ontological basis, and methodology of a medical ethics that is to be comprehensive and yet readily accepted by all. The authors begin by describing the scope of the analysis and discussing possible ethical systems and paradigms. They then deal with the structures and concepts necessary in the formulation of a coherent philosophy: normality and disease, scientific and juridical law, certainty and certitude, and decisions. Finally, they introduce particular human dimensions, such as quality of life, pain, and responsibility. Throughout, case examples illustrate the authors' theoretical framework.
It is argued that complexity is not attributable directly to systems or processes but rather to the descriptions of their `best' models, reflecting the difficulty of modelling them. Complexity is thus relative to the modelling language and the type of difficulty. This approach to complexity is situated in a model of modelling. Such an approach makes sense of a number of aspects of scientific modelling: complexity is not situated between order and disorder; noise can be explicated as excess modelling error; and simplicity is not truth-indicative but a useful heuristic when models are produced by a being with a tendency to elaborate in the face of error.
An investigation into the conditions conducive to the emergence of heterogeneity among agents is presented. This is done by using a model of creative artificial agents to investigate some of the possibilities. The simulation is based on Brian Arthur's 'El Farol Bar' model, but extended so that the agents also learn and communicate. The learning and communication are implemented using an evolutionary process acting upon a population of strategies inside each agent. This evolutionary learning process is based on a Genetic Programming algorithm, chosen to make the agents as creative as possible and thus allow the outside edge of the simulation trajectory to be explored. A detailed case study from the simulations shows how the agents differentiated, so that by the end of the run they had taken on qualitatively different roles. It provides some evidence that the introduction of a flexible learning process and an expressive internal representation facilitated the emergence of this heterogeneity.
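For readers unfamiliar with the underlying setup, a minimal sketch of an El Farol style simulation follows. The agent count, capacity threshold, and the three fixed prediction strategies here are hypothetical simplifications for illustration; the paper's agents instead evolve strategies by Genetic Programming and communicate, which this sketch omits.

```python
import random

CAPACITY = 0.6   # bar is unpleasant above 60% attendance (hypothetical)
N_AGENTS = 10
N_ROUNDS = 50

def predict(strategy, history):
    """Predict next attendance fraction from the attendance history."""
    if not history:
        return random.random()
    if strategy == "last":     # repeat last round's attendance
        return history[-1]
    if strategy == "mean3":    # average of the last three rounds
        recent = history[-3:]
        return sum(recent) / len(recent)
    if strategy == "mirror":   # assume a reversal around the capacity level
        return 2 * CAPACITY - history[-1]
    raise ValueError(strategy)

class Agent:
    def __init__(self):
        self.strategies = ["last", "mean3", "mirror"]
        self.scores = {s: 0.0 for s in self.strategies}

    def decide(self, history):
        # act on the currently best-scoring strategy: go if bar predicted quiet
        best = max(self.strategies, key=lambda s: self.scores[s])
        return predict(best, history) < CAPACITY

    def update(self, history):
        # penalise each strategy by how badly it would have mispredicted
        actual = history[-1]
        for s in self.strategies:
            self.scores[s] -= abs(predict(s, history[:-1]) - actual)

random.seed(0)
agents = [Agent() for _ in range(N_AGENTS)]
history = []
for _ in range(N_ROUNDS):
    going = sum(a.decide(history) for a in agents)
    history.append(going / N_AGENTS)
    for a in agents:
        a.update(history)

print(f"mean attendance over run: {sum(history) / len(history):.2f}")
```

Even with fixed strategy pools, attendance self-organises around the capacity level; the paper's question is what happens when the pools themselves are open-ended and evolving, so that agents can come to occupy qualitatively different roles.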
Here is a work of great originality, so great that even as consummate a specialist in Aristotelian biology as Pierre Pellegrin, who prefaces the book, can write (p. 10): "My years of study give me no more competence than any new reader of the Aristotelian biological corpus has to appreciate so novel a piece of work." As for myself, to give a first idea of it, I could do no better than to borrow a few sentences from that preface: "[…]"
One may recall the publication, at the beginning of 2008, of Sylvain Gougenheim's book Aristote au Mont Saint-Michel – Les racines grecques de l'Europe chrétienne (Le Seuil, Paris, 2008), the laudatory reviews the work received in major dailies (Le Monde, Le Figaro), and the polemic that ensued. That polemic, and the book that provoked it, seem already forgotten today, the spotlight of current events, as they say, having turned to other obje..
Much of experimental philosophy consists of surveying 'folk' intuitions about philosophically relevant issues. Are the results of these surveys evidence that the relevant folk intuitions cannot be predicted from the ‘armchair’? We found that a solid majority of philosophers could predict even results claimed to be 'surprising'. But, we argue, this does not mean that such experiments have no role at all in philosophy.