We examine ethical considerations in access to facial transplantation (FT), with implications for promoting health equity. As a form of vascularised composite allotransplantation, FT is still considered innovative, with a relatively low volume of procedures performed to date by a small number of active FT programmes worldwide. However, as numbers continue to increase and institutions look to establish new FT programmes, we anticipate that attention will shift from feasibility towards ensuring the benefits of FT are equitably available to those in need. This manuscript assesses barriers to care and their ethical implications across a number of considerations, with the intent of mapping various factors relating to health equity and fair access to FT. Evidence is drawn from an evolving clinical experience as well as published scholarship addressing several dimensions of access to FT. We also explore novel concerns that have yet to be mentioned in the literature. There are no data in this work.
Reichenbachian approaches to indexicality contend that indexicals are "token-reflexives": semantic rules associated with any given indexical type determine the truth-conditional import of properly produced tokens of that type relative to certain relational properties of those tokens. Such a view may be understood as sharing the main tenets of Kaplan's well-known theory regarding content, or truth-conditions, but it differs from that theory regarding the nature of the linguistic meaning of indexicals and also regarding the bearers of truth-conditional import and truth-conditions. Kaplan has criticized these approaches on different counts, the most damaging of which is that they make a "logic of demonstratives" impossible. The reason is that the token-reflexive approach entails that no two tokens of the same sentential type including indexicals are guaranteed to have the same truth-conditions. In this paper I rebut this and other criticisms of the Reichenbachian approach. Additionally, I point out that Kaplan's original theory of "true demonstratives" is empirically inadequate, and claim that any modification capable of accurately handling the linguistic data would have problems similar to those attributed to the Reichenbachian approach. This is intended to show that the difficulties, no matter how real, are not caused by idiosyncrasies of the "token-reflexive" view, but by deep facts about indexicality.
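As a minimal illustration of the token-reflexive idea at issue (Reichenbach's standard rule for "I", not a formulation drawn from the paper itself): for any properly produced token t of "I",

\[ \mathrm{ref}(t) = \mathrm{producer}(t), \]

so tokens, rather than types, carry truth-conditional import, and two tokens of one sentential type may differ in truth-conditions.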
This paper presents students' views about honest and dishonest actions within the pharmacy and medical learning environments, along with their views on how dishonest action might be ameliorated. Three research questions were posed: (1) What reasons would students articulate in reference to engaging in dishonest behaviours? (2) What reasons would students articulate in reference to maintaining high levels of integrity? (3) What strategies would students suggest to decrease engagement in dishonest behaviours and/or promote honest behaviours? The design of the study incorporated an initial descriptive analysis to interpret students' responses to an 18-item questionnaire about justifications for dishonest action. This was followed by a qualitative analysis of students' commentaries on why students would engage in either honest or dishonest action. Finally, a qualitative analysis was conducted on students' views regarding solutions to dishonest action. The quantitative results showed that students were more likely to use time-management and seriousness justifications for dishonest actions. The qualitative analysis found that students' actions (honest or dishonest) were guided by family and friends, the need to do well, issues of morality, and institutional guidelines. Students suggested that dishonest action could be ameliorated by external agencies, and they offered polarised views divided between punitive and rewards-based mechanisms. These results suggest that these students engaged in dishonest action for various reasons, and that solutions addressing dishonest action need to consider diverse mechanisms that likely extend beyond the educational institution.
The paper examines an alleged distinction that Van Gelder claims to exist between two different, but equally acceptable, ways of accounting for the systematicity of cognitive output (two "varieties of compositionality"): "concatenative compositionality" vs. "functional compositionality." The second is supposed to provide an explanation alternative to the Language of Thought Hypothesis. I contend that, if the definition of "concatenative compositionality" is taken in a way different from the official one given by Van Gelder (but one suggested by some of his formulations), then there is indeed a different sort of compositionality; in that case, however, the second variety is not an alternative to the language of thought. If, on the other hand, the concept of concatenative compositionality is taken along the lines of Van Gelder's explicit definition, then there is no reason to think that there is an alternative way of explaining systematicity.
Descriptive semantic theories purport to characterize the meanings of the expressions of languages in whatever complexity they might have. Foundational semantics purports to identify the kind of considerations relevant to establishing that a given descriptive semantics accurately characterizes the language used by a given individual or community. Foundational Semantics I presents three contrasting approaches to the foundational matters, and the main considerations relevant to appraising their merits. These approaches contend that we should look at the contents of speakers' intuitions; at the deep psychology of users and its evolutionary history, as revealed by our best empirical theories; or at the personal-level rational psychology of those subjects. Foundational Semantics II examines a fourth view, according to which we should look instead at norms enforced among speakers. The two papers also aim to determine the extent to which the approaches are genuinely rivals, or rather complementary.
This study used data from the 1984 Family History Survey conducted by Statistics Canada to examine recent trends and patterns of child-spacing among currently married women. Life table and proportional hazards estimates show that Canadian women, particularly those in younger age groups with higher education and longer work experience, start having children late, but have subsequent children rather quickly. This suggests that such women tend to complete childbearing within a compressed time period.
What accounts for the apocalyptic angst that is now so clearly present among Americans who do not subscribe to any religious orthodoxy? Why do so many popular television shows, films, and music nourish themselves on this very angst? And why do so many artists—from Coldplay to Tori Amos to Tom Wolfe—feel compelled to give it expression? It is tempting to say that America's fears and anxieties are understandable in the light of 9/11, the ongoing War on Terror, nuclear proliferation, and the seemingly limitless capacity of science to continually challenge our conceptions of the universe and ourselves. Perhaps, too, American culture remains so permeated by Protestant Christianity that even avowed skeptics cannot pry themselves from its grip. In _A Consumer's Guide to the Apocalypse,_ Eduardo Velásquez argues that these answers are too pat. Velásquez's astonishing thesis is that when we peer into contemporary artists' creative depiction of our sensibilities, we discover that the antagonisms that fuel the current culture wars stem from the same source. Enthusiastic religions and dogmatic science, the flourishing of scientific reason and the fascination with mystical darkness, cultural triumphalists and multicultural ideologues are all sustained by the same thing: a willful commitment to the basic tenets of the Enlightenment. Velásquez makes his point with insightful readings of the music of Coldplay, Tori Amos, and Dave Matthews and the fiction of Michael Frayn's _Copenhagen,_ Chuck Palahniuk's _Fight Club,_ and Tom Wolfe's _I Am Charlotte Simmons._ Written with grace and humor, and directed toward the lay reader, _A Consumer's Guide to the Apocalypse_ is a tour de force of cultural analysis.
According to a doctrine that I call "Cartesianism", knowledge – at least the sort of knowledge that inquirers possess – requires having a reason for belief that is reflectively accessible as such. I show that Cartesianism, in conjunction with some plausible and widely accepted principles, entails the negation of a popular version of Fallibilism. I then defend the resulting Cartesian Infallibilist position against popular objections. My conclusion is that if Cartesianism is true, then Descartes was right about this much: for S to know that p, S must have reasons for believing that p which are such that S can know, by reflection alone, that she has those reasons, and that she could not possibly have those reasons if p were not true. Where Descartes went wrong was in thinking that our ordinary, fallible, non-theologically grounded sources of belief (e.g., perception, memory, testimony) cannot provide us with such reasons.
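A rough formalization of the conclusion just stated (a gloss, not the paper's own notation, with Has(S, R) for S's having reasons R and K^r for knowledge by reflection alone):

\[ K_S\,p \;\rightarrow\; \exists R\,\bigl(\mathrm{Has}(S,R) \,\wedge\, K^{r}_S\,\mathrm{Has}(S,R) \,\wedge\, \Box\bigl(\mathrm{Has}(S,R) \rightarrow p\bigr)\bigr). \]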
Espino, Santamaria, and Garcia-Madruga (2000) report three results on the time taken to respond to a probe word occurring as an end term in the premises of a syllogistic argument. They argue that these results can only be predicted by the theory of mental models. It is argued here that two of these results, on differential reaction times to end terms occurring in different premises and in different figures, are consistent with Chater and Oaksford's (1999) probability heuristics model (PHM). It is further argued that the third finding, on different reaction times between figures, does not bear on processing difficulty, for which PHM predicts no differences between figures. It is concluded that Espino et al.'s results do not discriminate between theories of syllogistic reasoning as effectively as they propose.
The main idea that we want to defend in this paper is that the question of what a logic is should be addressed differently when structural properties enter the game. In particular, we want to support the idea that identifying a logic's set of valid inferences is not enough to characterize it. In other words, we will argue that two logical theories could identify the same set of validities and yet not be the same logic.
Nowadays, global inequalities in access to vaccines seem to be a growing problem, and Intellectual Property Rights have played an important role in both causing and worsening them. Firstly,...
It is widely accepted that classical logic is trivialized in the presence of a transparent truth-predicate. In this paper, we will explain why this point of view must be given up. The hierarchy of metainferential logics defined in Barrio et al. and Pailos recovers classical logic, either in the sense that every classical inferential validity is valid at some point in the hierarchy, or because a logic of a transfinite level defined in terms of the hierarchy shares its validities with classical logic. Each of these logics is consistent with transparent truth — as is shown in Pailos — and this suggests that, contrary to standard opinions, transparent truth is after all consistent with classical logic. However, Scambler presents a major challenge to this approach. He argues that this hierarchy cannot be identified with classical logic in any way, because it recovers no classical antivalidities. We embrace Scambler's challenge and develop a new logic based on these hierarchies. This logic recovers both every classical validity and every classical antivalidity. Moreover, we will follow the same strategy and show that contingencies also need to be taken into account, and that none of the logics so far presented is enough to capture classical contingencies. We will then develop a multi-standard approach to elaborate a new logic that captures not only every classical validity, but also every classical antivalidity and contingency. As a truth-predicate can be added to this logic, this result can be interpreted as showing that, despite extremely widely accepted claims to the contrary, classical logic does not trivialize in the context of transparent truth.
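For orientation, a minimal sketch of the base notion on which such hierarchies are standardly built: strict-tolerant (ST) validity over strong Kleene valuations with values 1, 1/2, 0, where premises are read strictly and conclusions tolerantly:

\[ \Gamma \vDash_{\mathrm{ST}} \varphi \;\Longleftrightarrow\; \text{for every valuation } v,\ \text{if } v(\gamma) = 1 \text{ for all } \gamma \in \Gamma, \text{ then } v(\varphi) \geq \tfrac{1}{2}. \]

Metainferences (inferences among inferences) are then evaluated one level up, and iterating this move level by level yields the hierarchy discussed above.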
In this article, we will present a number of technical results concerning Classical Logic, ST and related systems. Our main contribution consists in offering a novel identity criterion for logics in general and, therefore, for Classical Logic. In particular, we will first generalize the ST phenomenon, thereby obtaining a recursively defined hierarchy of strict-tolerant systems. Second, we will prove that the logics in this hierarchy are progressively more classical, although not entirely classical. We will claim that a logic is to be identified with an infinite sequence of consequence relations holding between increasingly complex relata: formulae, inferences, metainferences, and so on. As a result, the present proposal allows us to differentiate Classical Logic not only from ST, but also from other systems that share its valid metainferences. Finally, we show how these results have interesting consequences for some topics in the philosophical logic literature, among them the debate around Logical Pluralism. The reason is that the discussion of this topic is usually carried out employing a rivalry criterion for logics that will need to be modified in light of the present investigation, according to which two logics can be non-identical even if they share the same valid inferences.
This paper is a reply to Benjamin Smart's recent objections (2013: 319–332) to David Armstrong's solution to the problem of induction (1991: 503–511). To solve the problem of induction, Armstrong contends that laws of nature are the best explanation of our observed regularities, where laws of nature are dyadic relations of necessitation holding between first-order universals. Smart raises three objections against Armstrong's pattern of inference. First, regularities can explain our observed regularities; that is, universally quantified conditionals can do the required explanatory work. Second, if the Humean's pattern of inference is irrational, then Armstrong's pattern of inference is also irrational. Third, universal regularities are the best explanation of our observed regularities. I defend Armstrong's solution to the problem of induction, arguing against these three claims.
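As a gloss on the view under discussion (a standard rendering, not the paper's own notation): for Armstrong a law is a second-order necessitation relation N holding between universals, which entails, but is not entailed by, the corresponding regularity:

\[ N(F,G) \;\Rightarrow\; \forall x\,(Fx \supset Gx), \qquad \forall x\,(Fx \supset Gx) \;\not\Rightarrow\; N(F,G). \]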
This quoted passage makes a negative claim – a claim about what we are not doing when we characterize an episode or state as that of knowing – and it also makes a positive claim – a claim about what we are doing when we characterize an episode or state as that of knowing. Although McDowell has not endorsed the negative claim, he has repeatedly and explicitly endorsed the positive claim, i.e., that "in characterizing an episode or a state as that of knowing… we are placing it in the logical space of reasons, of justifying and being able to justify what one says." This is what I will henceforth call "the positive Sellarsian claim".
In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding has revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present article discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to modeling visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed.
In some recent articles, Cobreros, Egré, Ripley, and van Rooij have defended the idea that abandoning transitivity may lead to a solution to the trouble caused by semantic paradoxes. For that purpose, they develop the Strict-Tolerant approach, which leads them to entertain a nontransitive theory of truth, where the structural rule of Cut is not generally valid. However, that Cut fails in general in the target theory of truth does not mean that there are no safe instances of Cut involving semantic notions. In this article we intend to meet the challenge of answering how to regain all the safe instances of Cut, in the language of the theory, making essential use of a unary recovery operator. To fulfill this goal, we will work within the so-called Goodship Project, which suggests that in order to have nontrivial naïve theories it is sufficient to formulate the corresponding self-referential sentences with suitable biconditionals. A secondary aim of this article is to propose a novel way to carry this project out, showing that the biconditionals in question can be totally classical. In the context of this article, these biconditionals will be essentially used in expressing the self-referential sentences, and thus, as a collateral result of our work, we will prove that none of the recoveries expected of the target theory can be nontrivially achieved if self-reference is expressed through identities.
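The contrast between the two procedures for expressing self-reference can be sketched as follows (a standard rendering, not the paper's own notation), for a liar-like sentence λ and truth predicate T:

\[ \text{strong (identity):}\quad \ulcorner\lambda\urcorner = \ulcorner\neg T(\ulcorner\lambda\urcorner)\urcorner, \qquad \text{weak (equivalence):}\quad \lambda \leftrightarrow \neg T(\ulcorner\lambda\urcorner). \]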
Conditional propositions have received concentrated, though intermittent, theoretical attention since Antiquity, and over the last forty years that attention has been intense. In this article, we present the main developments in the logical analysis of conditional propositions and discuss how they play a central role in many philosophical theories. In the first part of these introductory remarks, we show how the ancients, chiefly the Megarian and Stoic schools, engaged with the question of conditionals, and how important this engagement was for the later developments of the logical analysis of conditionals in the medieval period and in the Modern Age. Finally, turning to the contemporary period, we present C. I. Lewis and his theory of strict implication, set out in open opposition to the doctrine of implication he judged mistaken. We hope this work serves as a panoramic presentation of a field of logic that remains fruitful and fertile for further development and new ideas.
Samkhya is one of the oldest systems of classical Indian philosophy, if not the oldest. This book traces its history from the third or fourth century B.C. up through the twentieth century. The Encyclopedia as a whole will present the substance of the various Indian systems of thought to philosophers unable to read Sanskrit and having difficulty finding their way about in the translations (where such exist). This volume includes a lengthy introduction by Gerald James Larson, which discusses the history of Samkhya and its philosophical contours overall. The remainder of the book includes summaries in English of all extant Sanskrit texts of the system. Originally published in 1987. The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
This paper analyzes the theory of area developed by Euclid in the Elements and its modern reinterpretation in Hilbert's influential monograph Foundations of Geometry. Particular attention is bestowed upon the role that two specific principles play in these theories, namely the famous common notion 5 and the geometrical proposition known as De Zolt's postulate. On the one hand, we argue that an adequate elucidation of how these two principles are conceptually related in the theories of Euclid and Hilbert is highly relevant for a better understanding of the respective geometrical practices. On the other hand, we claim that these conceptual relations unveil interesting issues concerning the two main contemporary approaches to the study of area of plane rectilinear figures, i.e., the geometrical approach consisting in the geometrical theory of equivalence and the metrical approach based on the notion of measure of area. Finally, in an appendix, logical relations among equivalence, comparison, and addition of magnitudes are examined schematically in an abstract setting.
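For orientation (a gloss in one standard formulation, not the paper's own wording): common notion 5 states that the whole is greater than the part, while De Zolt's postulate states that no polygon is equivalent in area content to a proper part of itself,

\[ P' \subsetneq P \;\Rightarrow\; P' \not\approx P, \]

where ≈ denotes equivalence by decomposition or complementation.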
In this paper, we present a non-trivial and expressively complete paraconsistent naïve theory of truth, as a step on the route towards semantic closure. We achieve this goal by expressing self-reference with a weak procedure, which uses equivalences between expressions of the language, as opposed to a strong procedure, which uses identities. Finally, we make some remarks regarding the sense in which the theory of truth discussed has a property closely related to functional completeness, and we present a sound and complete three-sided sequent calculus for this expressively rich theory.
The question of euthanasia and orthothanasia goes beyond merely legal aspects, necessarily entering the ethical, religious, social, and even economic fields. The position taken by the Conselho Federal de Medicina (Federal Council of Medicine) on the question of orthothanasia...
I have argued that orthographic processing cannot be understood and modeled without considering the manner in which orthographic structure represents phonological, semantic, and morphological information in a given writing system. A reading theory, therefore, must be a theory of the interaction of the reader with his or her linguistic environment. This outlines a novel approach to studying and modeling visual word recognition, one that focuses on the common cognitive principles involved in processing printed words across different writing systems. These claims were challenged by several commentaries that contested the merits of my general theoretical agenda, the relevance of the evolution of writing systems, and the plausibility of finding commonalities in reading across orthographies. Other commentaries extended the scope of the debate by bringing additional perspectives into the discussion. My response addresses all these issues. By considering the constraints of neurobiology on modeling reading, developmental data, and a large body of cross-linguistic evidence, I argue that front-end implementations of orthographic processing that do not stem from a comprehensive theory of the complex information conveyed by writing systems do not present a viable approach to understanding reading. The common principles by which writing systems have evolved to represent orthographic, phonological, and semantic information in a language reveal the critical distributional characteristics of orthographic structure that govern reading behavior. Models of reading should thus be learning models, primarily constrained by cross-linguistic developmental evidence that describes how the statistical properties of writing systems shape the characteristics of orthographic processing. When this approach is adopted, a universal model of reading is possible.
The origins and development of the problem of mental causation are outlined. The underlying presuppositions which give rise to the problem are identified. Possible strategies for solving, or dissolving, the problem are examined.