The purpose of this investigation is to extend earlier research on the relationship between corporate social and financial performance. The unique contribution of the study is the empirical analysis of a sample of companies from the banking industry and the use of Community Reinvestment Act ratings as a social performance measure. The empirical analysis solidly supports the hypothesis that the link between social and financial performance is positive.
In The Ethics of Postmodernity, Gary B. Madison and Marty Fairbairn have collected instructive and illuminating essays that address the dilemmas left in the wake of the postmodern attack on foundationalism. This collection is a powerful statement on the many directions a postmetaphysical ethics might take. Contributors include Barry Allen, Caroline Bayard, Robert Bernasconi, Thomas W. Busch, M.C. Dillon, Marty Fairbairn, Paul Fairfield, Morny Joy, Richard Kearney, Gary B. Madison, Joseph Margolis, Tom Rockmore, Charles E. Scott, Evan Simpson, and Mark Williams.
This article examines Simpson's paradox as applied to the theory of probabilities and percentages. The author discusses possible flaws in the paradox and compares it to the Sure Thing Principle, statistical inference, causal inference and probabilistic analyses of causation.
Simpson's Paradox is introduced and analysed via the mishaps of a researcher who at first falls afoul of the traps Simpson-reversals can set, and then learns to exploit those traps to advantage. (Note: An error in the treatment of the Sure Thing Principle is corrected in "Simpson's Paradox: A Logically Benign, Empirically Treacherous Hydra").
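A Simpson reversal is easy to exhibit numerically. The sketch below uses the classic (and here purely illustrative) kidney-stone figures: treatment A has the higher success rate within each subgroup, yet the lower rate once the subgroups are pooled. The group names and counts are assumptions for illustration, not data drawn from either article above.

```python
# Hypothetical counts illustrating a Simpson reversal: treatment A wins
# within each subgroup but loses overall.  Format: (successes, trials).
groups = {
    "small_stones": {"A": (81, 87),   "B": (234, 270)},
    "large_stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each subgroup, A has the higher success rate.
for name, arms in groups.items():
    ra, rb = rate(*arms["A"]), rate(*arms["B"])
    print(f"{name}: A={ra:.1%}  B={rb:.1%}  -> A better: {ra > rb}")

# Pooled over both subgroups, B has the higher rate -- the reversal.
tot = {arm: tuple(map(sum, zip(*(g[arm] for g in groups.values()))))
       for arm in ("A", "B")}
ra, rb = rate(*tot["A"]), rate(*tot["B"])
print(f"overall: A={ra:.1%}  B={rb:.1%}  -> A better: {ra > rb}")
```

The reversal arises because the subgroup sizes are lopsided: A is applied mostly to the hard cases (large stones) and B mostly to the easy ones, so pooling lets the case mix, not the treatment, drive the overall percentages.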
Purely parallel neural networks can model object recognition in brief displays – the same conditions under which illusory conjunctions have been demonstrated empirically. Correcting errors of illusory conjunction is the “tag-assignment” problem for a purely parallel processor: the problem of assigning a spatial tag to nonspatial features, feature combinations, and objects. This problem must be solved to model human object recognition over a longer time scale. Our model simulates both the parallel processes that may underlie illusory conjunctions and the serial processes that may solve the tag-assignment problem in normal perception. One component of the model extracts pooled features and another provides attentional tags that correct illusory conjunctions. Our approach addresses two questions: How can objects be identified from simultaneously attended features in a parallel, distributed representation? How can the spatial selectional requirements of such an attentional process be met by a separation of pathways for spatial and nonspatial processing? Our analysis of these questions yields a neurally plausible simulation of tag assignment based on synchronizing feature processing activity in a spatial focus of attention.
Cognitivism about trust says that it requires belief that the trusted is trustworthy; non-cognitivism denies this. At stake is how to make sense of the strong but competing intuitions that trust is an attitude that is evaluable both morally and rationally. In proposing that one's respect for another's agency may ground one's trusting beliefs, second-personal accounts provide a way to endorse both intuitions. They focus attention on the way that, in normal situations, it is the person whom I trust. My task is to develop an account of the latter insight without the controversial theoretical commitments of the former. I propose a functional account for why the second and third-personal ‘systems’ operate not just in parallel, but in tandem, in support of a cognitivist account of trust.
During human evolutionary history, there were “trade-offs” between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes. Key Words: conditional strategies; evolutionary psychology; fluctuating asymmetry; mating; reproductive strategies; sexual selection.
May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the re-distribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots’ design and manufacture.
Risk management of nanotechnology is challenged by the enormous uncertainties about the risks, benefits, properties, and future direction of nanotechnology applications. Because of these uncertainties, traditional risk management principles such as acceptable risk, cost–benefit analysis, and feasibility are unworkable, as is the newest risk management principle, the precautionary principle. Yet, simply waiting for these uncertainties to be resolved before undertaking risk management efforts would not be prudent, in part because of the growing public concerns about nanotechnology driven by risk perception heuristics such as affect and availability. A more reflexive, incremental, and cooperative risk management approach is required, which not only will help manage emerging risks from nanotechnology applications, but will also create a new risk management model for managing future emerging technologies.
Trust is difficult to define. Instead of doing so, I propose that the best way to understand the concept is through a genealogical account. I show how a root notion of trust arises out of some basic features of what it is for humans to live socially, in which we rely on others to act cooperatively. I explore how this concept acquires resonances of hope and threat, and how we analogically apply this in related but different contexts. The genealogical account explains both why the notion has such value for us and why it is difficult to define.
This article develops a social epistemological analysis of Web-based search engines, addressing the following questions. First, what epistemic functions do search engines perform? Second, what dimensions of assessment are appropriate for the epistemic evaluation of search engines? Third, how well do current search engines perform on these? The article explains why they fulfil the role of a surrogate expert, and proposes three ways of assessing their utility as an epistemic tool—timeliness, authority prioritisation, and objectivity. “Personalisation” is a current trend in Internet-delivered services, and consists in tailoring online content to the interests of the individual user. It is argued here that personalisation threatens the objectivity of search results. Objectivity is a public good; so there is a prima facie case for government regulation of search engines.
Professor Engelhardt’s After God sets out in fine detail a “j’accuse” of the Western project from the medieval Scholastic doctors, through the Enlightenment, to Kant and Hegel, and finally to its telos in postmodernity, which in fact was the logical outcome of what Professor Engelhardt sees as the abuse of reason, for reason could never endure the demands made of it. I propose that Professor Engelhardt is correct in his description of our present epoch, though partly, and critically, misguided in his diagnosis of why, and thus falls short in a prescription for the restoration of salus.
An essential overview of an important intellectual movement, Logical Empiricism in North America offers the first significant, sustained, and multidisciplinary attempt to understand the intellectual, cultural, and political dimensions of ...
As policy makers struggle to develop regulatory oversight models for nanotechnologies, there are important lessons that can be drawn from previous attempts to govern other emerging technologies. Five such lessons are the following: public confidence and trust in a technology and its regulatory oversight are probably the most important factor for the commercial success of a technology; regulation should avoid discriminating against particular technologies unless there is a scientifically based rationale for the disparate treatment; regulatory systems need to be flexible and adaptive to rapidly changing technologies; ethical and social concerns of the public about emerging technologies need to be expressly acknowledged and addressed in regulatory oversight; and international harmonization of regulation may be beneficial in a rapidly globalizing world.
Nanotechnology is the latest in a growing list of emerging technologies that includes nuclear technologies, genetics, reproductive biology, biotechnology, information technology, robotics, communication technologies, surveillance technologies, synthetic biology, and neuroscience. As was the case for many of the technologies that came before, a key question facing nanotechnology is what type of regulatory oversight is appropriate for this emerging technology. As two of us wrote several years ago, the question facing nanotechnology is not whether it will be regulated, but when and how. Yet, appropriate regulation of nanotechnology will be challenging. The term “nanotechnology” incorporates a broad, diverse range of materials, technologies, and products, with an even greater spectrum of potential risks and benefits. This technology cuts across the jurisdiction of many existing regulatory statutes and regulatory agencies, and does so across the globe. Nanotechnology is developing at an enormously rapid rate, perhaps surpassing the capability of any potential regulatory framework to keep pace. Finally, the risks of nanotechnology remain largely unknown, both because of the multitude of variations in the technology and because of the limited applicability of traditional toxicological approaches such as structure-activity relationship to nanotechnology products.
This book addresses two basic questions: What is the proper philosophical analysis of the concept of substance? and What kinds of compound substances are there? The second question is mainly addressed by asking what relations among objects are necessary and sufficient for their coming to compose a larger whole. The first 72 pages of the book contain a short history of attempts to answer the first question, and a brief presentation of the analysis the authors defend at length in their earlier book, Substance Among Other Categories. In the remaining 119 pages, the authors take up the second question. This order of presentation makes sense; but it may help to create a false impression in those who only glance at the first few pages—that this book is just a simplified version of the earlier one, with a little bit of history thrown in. It would be quite unfortunate, however, if very many potential readers get this impression; for it might discourage them from looking closely at the bulk of the book, which is new. The issues discussed in the later chapters are at the center of one of the most lively debates in contemporary metaphysics; and the position Hoffman and Rosenkrantz stake out is appealing and carefully articulated. Their views deserve careful attention from philosophers working on the metaphysics of persistence through time, personal identity, artifact identity, and mereology.
Let f be a computable function from finite sequences of 0s and 1s to real numbers. We prove that strong f-randomness implies strong f-randomness relative to a PA-degree. We also prove: if X is strongly f-random and Turing reducible to Y where Y is Martin-Löf random relative to Z, then X is strongly f-random relative to Z. In addition, we prove analogous propagation results for other notions of partial randomness, including non-K-triviality and autocomplexity. We prove that f-randomness relative to a PA-degree implies strong f-randomness, hence f-randomness does not imply f-randomness relative to a PA-degree.
In this international and interdisciplinary collection of critical essays, distinguished contributors examine a crucial premise of traditional readings of Plato's dialogues: that Plato's own doctrines and arguments can be read off the statements made in the dialogues by Socrates and other leading characters. The authors argue, in general and with reference to specific dialogues, that no character should be taken to be Plato's mouthpiece. This is essential reading for students and scholars of Plato.
We have developed an argument and evidence from our experiences for the utility of 3D virtual reality systems in the interpretation of 3D geologic data. Interpretation of 3D data by geoscientists is performed in “the mind.” Visualization of 3D data in 3DVR environments is an efficient method of getting the data into the mind. Descriptions of visualization and interpretation of several different geologic data sets in 3DVR environments illustrate the advantages of 3DVR. Despite the advantages of visualization in 3DVR, several reasons exist for the present limited use of 3DVR by geoscientists. With the relatively recent availability and affordability of smaller hardware and software systems, we believe 3DVR should become commonplace on the desktops of geoscience interpreters.
Bribery is a frequently discussed problem in international business. This article looks at the problem from the North American and from the developing country perspective. It describes and analyses specific cases and highlights recurring patterns of behavior. The article is based on the experiences of the authors who have been promoting business in the developing world. In addition to ethical considerations involved with bribery there are some very practical reasons for not engaging in the practice. There are also real barriers to establishing the relationships necessary to avoid the practice yet continue doing business.
Putting robots on the battlefield is clearly appealing for policymakers. Why risk human lives, when robots could take our place, and do the dirty work of killing and dying for us? Against this, I argue that robots will be unable to win the kind of wars that we are increasingly drawn into. Modern warfare tends towards asymmetric conflict. Asymmetric warfare cannot be won without gaining the trust of the civilian population; this is ‘the hearts and minds’, in the hackneyed phrase from counter-insurgency manuals. I claim that the very feature which makes it attractive to send robots to war in our place, the absence of risk, also makes it peculiarly difficult for humans to trust them. Whatever the attractions, sending robots to war in our stead will make success in counter-insurgency elusive. Moreover, there is ethical reason to be relieved at this conclusion. For if war is potentially costly, then this does much to ensure that it will be a choice only of last resort, in accordance with the traditional doctrine of jus ad bellum. In this instance, morality and expediency, fortunately, coincide.
Trust online can be a hazardous affair; many are trustworthy, but some people use the anonymity of the web to behave very badly indeed. So how can we improve the quality of evidence for trustworthiness provided online? I focus on one of the devices we use to secure others’ trustworthiness: tracking past conduct through online reputation systems. Yet existing reputation systems face problems. I analyse these, and in the light of this develop some principles for system design, towards overcoming these challenges. By providing better evidence for trustworthiness online, we can also encourage people actually to be trustworthy more often, which is an ethically welcome outcome.
David Pears's contention that the Tractatus is to be understood as advancing a form of metaphysical realism is defended against McGuinness's view that Tractatus 1-2.063 is to be treated just as introducing a metaphysical myth that may be employed to bring into prominence salient features of propositions. Starting with a discussion of the involved difficulties, e.g., determining whether Wittgenstein does in fact provide an argument for the existence of simple objects, what this argument is, and what role the existence of simple objects plays within the Picture Theory of the Proposition, Wittgenstein's argument for the existence of simple objects is reconstructed, augmenting Pears's existing account by providing further details of why Wittgenstein held that determinacy of sense requires the existence of simple objects.
Why are people trustworthy? I argue for two theses. First, we cannot explain many socially important forms of trustworthiness solely in terms of the instrumentally rational seeking of one’s interests, in response to external sanctions or rewards. A richer psychology is required. So, second, possession of moral character is a plausible explanation of some socially important instances when people are trustworthy. I defend this conclusion against the influential account of trust as ‘encapsulated interest’, given by Russell Hardin, on which most trustworthiness is explained by the interest of a continuing relationship.
Is there a justified presumption that a speaker is testifying sincerely? Anti-reductionism about testimony claims that there is, absent reasons to the contrary. Yet why believe this, given the actuality and prevalence of lies and deception? I examine one argument that may be appropriated to meet this challenge, David Lewis's claim that truthfulness is a convention. I argue that it fails, and that the supposition that there is a presumption of sincerity remains unsupported. The failure of Lewis's argument is instructive, however, for it shows us a better way of approaching language use than the standard anti-reductionist treatment. As speech is an intentional action, so a presumption of the sincerity or otherwise of others' testimony must be explicable in the terms we normally use to explain action.
In this paper we provide evidence from Japan that bears on a general theory of agenda power in legislatures. By agenda power we mean the power to determine: (a) which bills are considered in the plenary session of the legislature and (b) restrictions on debate and amendment to these bills, when they are considered. While a substantial amount of work has focused on the second category of agenda power, including studies of special rules in the US House (e.g., Sinclair forthcoming), closure in the UK House of Commons (e.g., Cox, 1987; Dion, 1997), and the guillotine in the French National Assembly (e.g., Huber, 1996), there is very little on the first and arguably more fundamental sort of agenda power. This agenda power is our focus here, and henceforth when we refer to agenda power we shall mean this narrower conception.