Monogenesis of language is widely accepted, but the conventional argument seems to be mistaken; a simple probabilistic model shows that polygenesis is likely. Other prehistoric inventions are discussed, as are problems in tracing linguistic lineages. Language is a system of representations; within such a system, words can evoke complex and systematic responses. Along with its social functions, language is important to humans as a mental instrument. Indeed, the invention of language, that is, the accumulation of symbols to represent emotions, objects, and acts, may be the most important event in human evolution, because so many developments follow from it. For example, Edward Sapir speculated that some embryonic form of language must have been available to early man to help him fashion tools from stone (Sapir, 1921). Sophisticated biface stone tools date to early Homo erectus some 1.5 million years ago, suggesting a similar age for language. This paper considers whether the invention of language occurred at only one prehistoric site or at several sites. In other words, did language emerge by monogenesis or polygenesis? Early thinkers believed in monogenesis, against a background of divine creation. Perhaps the best known account is the biblical story of Adam giving names to plants and animals in the Garden of Eden. Similar legends are found among many peoples. Modern linguists too assume monogenesis, but on probabilistic grounds (see, for instance, Southworth and Daswani, 1974, p. 314). The argument seems to be that the invention of language is an extremely unlikely event, because symbolization involves abstraction and requires synchronized insight by several individuals; therefore, the probability of occurrence at more than one site must be vanishingly small. We have found no explicit quantitative treatment of this question in the literature, but the underlying logic has to be the multiplication of probabilities.
If p is small at one site, then p·p for two sites is smaller still, and so on. This reasoning is false, as we show here. The fallacy lies in the focus on two particular sites rather than consideration of all pairs of sites.
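The fallacy the abstract describes can be made concrete with a small numerical sketch. The per-site probability `p` and the number of candidate sites `n` below are hypothetical values chosen for illustration, not figures from the paper: the chance of invention at two *particular* sites is p·p, but the chance of invention at two or more of *any* of the n sites can be close to 1.

```python
def prob_polygenesis(n: int, p: float) -> float:
    """P(invention at two or more of n independent sites, each with
    per-site probability p) = 1 - P(no sites) - P(exactly one site)."""
    q = 1.0 - p
    return 1.0 - q**n - n * p * q**(n - 1)

# Hypothetical numbers: many candidate sites, tiny per-site probability.
n, p = 10_000, 1e-3

two_named_sites = p * p                 # the fallacious calculation: 1e-6
any_two_or_more = prob_polygenesis(n, p)

print(f"two particular sites: {two_named_sites:.2e}")   # vanishingly small
print(f"two or more of all {n} sites: {any_two_or_more:.4f}")  # near certainty
```

With an expected number of inventions n·p = 10, independent sites make multiple inventions overwhelmingly likely even though any two named sites almost surely both fail, which is the pairs-of-sites point the paper makes.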
There have been many efforts to infer causation from association by using statistical models. Algorithms for automating this process are a more recent innovation. In Humphreys and Freedman [(1996) British Journal for the Philosophy of Science 47, 113–123] we showed that one such approach, by Spirtes et al., was fatally flawed. Here we put our arguments in a broader context and reply to Korb and Wallace [(1997) British Journal for the Philosophy of Science 48, 543–553] and to Spirtes et al. [(1997) British Journal for the Philosophy of Science 48, 555–568]. Their arguments leave our position unchanged: claims to have developed a rigorous engine for inferring causation from association are premature at best, the theorems have no implications for samples of any realistic size, and the examples used to illustrate the algorithms are indicative of failure rather than success. The gap between association and causation has yet to be bridged.
Without significant intervention, demand for crude oil could rise by a further 25% by 2035, stemming from its use for transportation, particularly road transport. Many technologies for alternative fuels and substitute transport energy carriers are being researched, but successful implementation of these technologies at scale will require attention to consumer-behavioural and policy challenges as well as adapting existing or introducing new commercial value chains. In particular, there will be new capital-intensive roles for which there are no obvious contenders as yet. The legacy of diverse urban planning and fuel taxation policies and varying degrees of consumer inertia will lead to different rates of adoption of different alternative technologies in regional markets. In the absence of technology that provides a compelling consumer proposition, substitution of crude demand in mature markets will be challenging, as will be channelling exponential growth from growing markets like China into less crude-intensive road transport solutions.
After sketching the conflict between objectivists and subjectivists on the foundations of statistics, this paper discusses an issue facing statisticians of both schools, namely, model validation. Statistical models originate in the study of games of chance, and have been successfully applied in the physical and life sciences. However, there are basic problems in applying the models to social phenomena; some of the difficulties will be pointed out. Hooke's law will be contrasted with regression models for salary discrimination, the latter being a fairly typical application in the social sciences.
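The contrast the abstract draws can be illustrated with a toy computation. A minimal sketch, using made-up measurements for a spring whose true constant is near 200 N/m: Hooke's law F = kx has a single parameter that is stable across occasions, so an ordinary least-squares fit recovers it; regression models for salary discrimination posit analogous coefficients without comparable grounds for believing a stable parameter exists.

```python
# Least-squares fit of Hooke's law F = k*x (a line through the origin).
# The measurements below are synthetic, for illustration only.
xs = [0.01, 0.02, 0.03, 0.04, 0.05]    # spring extensions (m)
fs = [2.05, 3.98, 6.10, 7.92, 10.03]   # measured forces (N)

# Closed-form OLS estimate with no intercept: k_hat = sum(x*F) / sum(x^2)
k_hat = sum(x * f for x, f in zip(xs, fs)) / sum(x * x for x in xs)
print(f"estimated spring constant: {k_hat:.1f} N/m")  # close to 200 N/m
```

The point of the contrast is not the arithmetic, which is identical in both applications, but whether the estimated quantity corresponds to anything constant in the world.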
This essay explores the epistemological significance of the kinds of beliefs that grow out of traumatic experiences, such as the rape survivor's belief that she is never safe. On current theories of justification, beliefs like this one are generally dismissed due to either insufficient evidence or insufficient propositional content. Here, Freedman distinguishes two discrete sides of the aftermath of psychic trauma, the shattered self and the shattered worldview. This move enables us to see these beliefs as beliefs; in other words, as having cognitive content. Freedman argues that what we then need is a theory of justification that allows us to handpick reliable sources of information on sexual violence, and give credibility where deemed appropriate. She advances a mix of reliabilism and coherentism that privileges feminism. On this account, the evidence for the class of beliefs in question will depend on an act of sexual violence (or testimony, or statistics) to the extent that the act is a reliable indication of the prevalence of sexual violence against women.
Looking at specific populations of knowers reveals that the presumption of sameness within knowledge communities can lead to a number of epistemological oversights. A good example of this is found in the case of survivors of sexual violence. In this paper I argue that this case study offers a new perspective on the debate between the epistemic internalist and externalist by providing us with a fresh insight into the complicated psychological dimensions of belief formation and the implications that this has for an epistemology that demands reasons that are first-person accessible.
Duty and Healing positions ethical issues commonly encountered in clinical situations within Jewish law. The concept of duty is significant in exploring bioethical issues, and this book presents an authentic and non-parochial Jewish approach to bioethics, while it includes critiques of both current secular and Jewish literatures. Among the issues the book explores are the role of family in medical decision-making, the question of informed consent as a personal religious duty, and the responsibilities of caretakers. The exploration of contemporary ethical problems in healthcare through the lens of traditional sources in Jewish law makes the book an indispensable guide to moral knowledge.
The old literature on whether medical confidentiality may be breached to warn a spouse of a risk of contracting syphilis from his/her partner — a deep and rich literature — has become relevant once again in the context of HIV infection and AIDS. This paper examines the reasoning and method employed in: the Catholic approach, centered on the patient's (property) right to the secret; a (generic) model of justice, utilizing minimal principles of non-aggression and restitution; and an approach involving the elimination of unstable alternatives, namely the view that public health officials, but not the spouse, may/must be notified, and the view that the physician is at liberty to disclose but is not obliged to do so. The theory and method behind confidentiality turn out to be deeper than one might have anticipated.
OBJECTIVE: To investigate whether eligibility criteria that exclude the elderly, persons with psychiatric disease, and persons with substance abuse problems from participation in randomized controlled trials (RCTs) are subjective and hence a source of variability in enrolment decisions and investigator uncertainty. DESIGN: Survey questionnaire. PARTICIPANTS: Cancer investigators from the United States and Canada. INTERVENTIONS: Investigators were presented with clinical vignettes from 3 patient categories--eligible, ineligible and uncertain--for each of 5 eligibility criteria--3 subjective and 2 objective--and were asked whether they would enrol the patient in a trial and how sure they were of this decision. Demographic characteristics of the investigators were also collected. OUTCOME MEASURES: The difference in enrolment decisions between subjective and objective criteria, and the difference in the certainty associated with these decisions. RESULTS: Of 365 questionnaires sent out, 224 usable ones were returned. Compared with the objective criteria, the subjective criteria were associated with more variable enrolment decisions (p = 0.07 for the "eligible" scenario and p = 0.0001 for the "ineligible" and "uncertain" scenarios), and investigators were less sure about the decisions they made (p = 0.0001 for all scenarios). Demographic characteristics of the investigators failed to explain the observed differences. CONCLUSIONS: Subjective eligibility criteria may interfere with the conduct and interpretation of RCTs and, therefore, their use ought to be justified explicitly in the study protocol. RCT designers, funding agencies and research ethics boards have an important role in reviewing eligibility criteria for their necessity.
OBJECTIVE: Pilot study to characterize treatment differences between patients treated in clinical trials and those treated in a clinical setting. Previous studies have shown higher survival rates for participants in trials of cancer therapy. This difference is observed even after rates are adjusted for important covariates such as age and stage of disease. DESIGN: Retrospective chart review. SETTING: Oncology outpatient department in a tertiary care hospital. PATIENTS: Ninety women 18 to 70 years of age with early-stage breast cancer who were diagnosed in 1990. Fifty-one of the women were treated through clinical trials and 39 were treated outside of clinical trials. OUTCOME MEASURES: Number of blood tests, telephone calls, clinic visits and imaging procedures as well as intensity of chemotherapy and use of radiation therapy. The age of the patient and the stage of disease were important covariates. RESULTS: After the analysis was controlled for patient age and stage of disease, patients treated through a clinical trial were more likely to receive standard-dose chemotherapy (p = 0.020, 95% confidence interval 1.20 to 200.73) and more frequent blood tests (p < 0.001, 95% confidence interval 1.02 to 1.13) than other patients treated in the clinic. CONCLUSIONS: Our results provide a plausible mechanism for the observed survival advantage for participants in clinical trials in oncology. Further study is called for. If these results are confirmed, they have important implications for informed consent to participate in clinical trials and for clinical practice.
We studied changes in eligibility criteria--the largest impediment to patient accrual--in two samples of clinical trials. Trials from the NSABP (National Surgical Adjuvant Breast and Bowel Program) and POG (Pediatric Oncology Group) were analyzed. After eliminating duplications, the criteria in each protocol were enumerated and classified according to a novel schema. NSABP trials contained significantly more criteria than POG trials, and added precision criteria (making study populations homogeneous) at a faster rate than POG studies. The difference between NSABP studies (explanatory trials) and POG studies (pragmatic trials) suggests that large numbers of eligibility criteria are not necessary for quality studies. We recommend that: (1) the inclusion/exclusion criteria distinction be abandoned; (2) eligibility criteria be explicitly justified; (3) the need for each criterion be assessed when new trials are planned; (4) criteria in phase III trials restricting patient accrual be minimized; and (5) further research be done to assess the impact of criteria on generalizability.
This research concerns accountability by the U.S. electric utility industry for the financial impacts of cap-and-trade emissions allowance activity. We report findings from an extensive examination of disclosure practices for more than 100 facilities that were required to curb pollutant discharges and participate in a government-mandated program of emission allowance distribution and trading.
Part of Keynes' 'struggle of escape from habitual modes of thought and expression' (Keynes 1960: viii) involves an implicit attempt to break with the methodology as well as the theory of the past. Unfortunately, the rhetorical strategy Keynes adopted in The General Theory blurred this attempt. As a result, it is only by examining both the methodology and rhetoric embedded in this work that it becomes possible to understand the book as a coherent whole. This paper demonstrates the validity of taking such an approach.
This paper is an attempt to achieve a moral understanding of recombinant DNA technology through an examination of the Biblical ban on the cross-breeding of species, as that ban was understood by traditional Jewish commentators. By paying close attention to the concept of natural law which some of those commentators employed in this connection, a nuanced response to the modern moral problem can be developed, which is immune to the standard arguments employed against those who rely upon natural law.
My favorite opponent in this debate once made a remarkable concession, not that it interfered with business as usual: 'No sensible social scientist believes any particular specification, coefficient estimate, or standard error. Social science theories ... imply that specifications and parameters constant over situations do not exist ... One searches for qualitative theory ... not for quantitative specifications' (Achen 1987, p. 149). With Hooke's law and the like, we are estimating parameters in specifications that are constant across time, at least to a very good degree of approximation (but see Cartwright 1983). What are the social scientists doing when they estimate non-existent parameters, and put standard errors on the output? How can that help them search for qualitative theory? Those are among the first of my questions, and I never get answers.
In this paper I argue that the Strong Programme's aim to provide robust explanations of belief acquisition is limited by its commitment to the symmetry principle. For Bloor and Barnes, the symmetry principle is intended to drive home the fact that epistemic norms are socially constituted. My argument here is that even if our epistemic standards are fully naturalized, even relativized, they nevertheless can play a pivotal role in why individuals adopt the beliefs that they do. Indeed, sometimes the fact that a belief is locally endorsed as rational is the only reason why an individual holds it. In this way, norms of rationality have a powerful and unique role in belief formation. But if this is true then the symmetry principle's emphasis on 'sameness of type' is misguided. It has the undesirable effect of not just naturalizing our cognitive commitments, but trivializing them. Indeed, if the notion of 'similarity' is to have any content, then we are not going to classify beliefs formed in accordance with deeply entrenched epistemic norms as 'the same' as beliefs formed without reflection on these norms, or formed in spite of them. My suggestion here is that we give up the symmetry principle in favor of a more sophisticated principle, one that allows for a taxonomy of causes rich enough to allow us to delineate the unique impact epistemic norms have on those individuals who subscribe to them.
In previous work, I defended Larry Laudan against the criticism that the axiological component of his normative naturalism lacks a naturalistic justification. I argued that this criticism depends on an equivocation over the term 'naturalism' and that it begs the question against what we are entitled to include in our concept of nature. In this paper, I generalize that argument and explore its implications for Laudan and other proponents of epistemic naturalism. Here, I argue that a commitment to naturalism in the methods and aims of science inevitably entails a kind of epistemic relativism. However, I argue that this should not be interpreted as a reductio of naturalism, since the admission of contextually based standards and aims of science does not result in quietism when it comes to important questions concerning scientific rationality.
Doppelt (1986, 1990), Siegel (1990), and Rosenberg (1996) argue that the pivotal feature of Laudan's normative naturalism, namely his axiology, lacks a naturalistic foundation. In this paper I show that this objection turns on a misunderstanding of Laudan's use of the term 'naturalism'. Specifically, I argue that there are two important senses of naturalism running through Laudan's work. Once these two strands are made explicit, the objection raised by Doppelt and others simply disappears.
Helen Longino argues that the way to ensure scientific knowledge is objective is to have a diversity of scientific investigators. This is the best example of recent feminist arguments which hold that the real value of diversity is epistemic, and not political, but it only partly succeeds. In the end, Longino's objectivity amounts to intersubjective agreement about contextually based standards, and while her account gives us a good reason for wanting diversity in our scientific communities, this reason turns out to be political.
The authors find it more useful to pay attention to relationships than to boundaries. By focusing attention on bounded, individual psychological issues, the metaphor of boundaries can distract helping professionals from thinking about inequities of power. It oversimplifies a complex issue, inviting us to ignore discourses around gender, race, class, culture, and the like that support injustice, abuse, and exploitation. Making boundaries a central metaphor for ethical practice can keep us from critically examining the effects of distance, withdrawal, and non-participation. The authors describe how it is possible to examine the practical, moral, and ethical effects of our participation in relationships by focusing on just relationships rather than on boundaries. They give illustrations and clinical examples of relationally-focused ethical practices that derive from a narrative approach to therapy.
A common difficulty with the application of theories of justice to the allocation of medical resources is the assumption that one perspective is primary, whether that privileged perspective be that of the practitioner, on the one hand, or the policy analyst on the other. By a discussion of three theories — those of Ramsey, Childress, and Joseph Fletcher — I attempt to show that these perspectives must be treated as related. As a result, values and ethics expressed in micro-allocation should be uniform with those of macro-allocation. This point has implications for such issues as substance and procedure in just decision-making, the role of the political process in medical allocation, and the definition of health and health services.
A series of studies investigated the capacity of children between the ages of 7 and 12 to give free and informed consent to participation in psychological research. Children were reasonably accurate in describing the purpose of studies, but many did not understand the possible benefits or especially the possible risks of participating. In several studies children's consent was not affected by the knowledge that their parents had given their permission or by the parents saying that they would not be upset if the children refused. In contrast, other studies found that children were much more likely to stop their participation if the experimenter said explicitly that she would not be upset if they stopped. We suggest that experimenters should pay more attention to describing the possible risks and benefits of participation in research, and that they should also make it clearer to children that they are free to stop once they have begun.
I argue that Cheryl Misak, in her Deflating Truth: Pragmatism vs. Minimalism, puts forth a pragmatist theory of truth that is deflationary in spirit but goes beyond the triviality of the equivalence schema. Furthermore, Misak's criticism of disquotationalism is symptomatic of a larger problem with her pragmatist theory of truth, namely her desire to explicate justification in terms of truth. She is right that people's assertoric practices involve defending their beliefs to one another, but she is wrong to think that people have to reinflate truth to make sense of this.
In summary, the usual elements of a typical health care ethics consultation note might reasonably accommodate the needs and expectations of relevant parties, and would therefore include: 1. identification of the relevant ethical issues, questions, or dilemmas; 2. reference to any relevant facts--medical, nursing, social, psychological, spiritual, legal, political, etc.; 3. a prioritized list of recommendations to improve coordinated care; 4. a clear and concise articulation of relevant arguments, with specific reference to the list of recommendations as well as to the institution's overall ethos; 5. a contextual statement, identifying the perceived degree of consensus or support for the recommendations and conclusions, as well as any inherent agendas.