Counterfactual thinking involves imagining hypothetical alternatives to reality. Philosopher David Lewis argued that people estimate the subjective plausibility that a counterfactual event might have occurred by comparing an imagined possible world in which the counterfactual statement is true against the current, actual world in which the counterfactual statement is false. Accordingly, counterfactuals considered to be true in possible worlds comparatively more similar to ours are judged as more plausible than counterfactuals deemed true in possible worlds comparatively less similar. Although Lewis did not originally develop his notion of comparative similarity to be investigated as a psychological construct, this study builds upon his idea to empirically investigate comparative similarity as a possible psychological strategy for evaluating the perceived plausibility of counterfactual events. More specifically, we evaluate judgments of comparative similarity between episodic memories and episodic counterfactual events as a factor influencing people's judgments of plausibility in counterfactual simulations, and we also compare it against other factors thought to influence judgments of counterfactual plausibility, such as ease of simulation and prior simulation. Our results suggest that the greater the perceived similarity between the original memory and the episodic counterfactual event, the greater the perceived plausibility that the counterfactual event might have occurred. While similarity between actual and counterfactual events, ease of imagining, and prior simulation of the counterfactual event were all significantly related to counterfactual plausibility, comparative similarity best captured the variance in ratings of counterfactual plausibility. Implications for existing theories on the determinants of counterfactual plausibility are discussed.
The national-level scenarios project NanoFutures focuses on the social, political, economic, and ethical implications of nanotechnology, and is initiated by the Center for Nanotechnology in Society at Arizona State University (CNS-ASU). The project involves novel methods for the development of plausible visions of nanotechnology-enabled futures, elucidates public preferences for various alternatives, and, using such preferences, helps refine future visions for research and outreach. In doing so, the NanoFutures project aims to address a central question: how to deliberate the social implications of an emergent technology whose outcomes are not known. The solution pursued by the NanoFutures project is twofold. First, NanoFutures limits speculation about the technology to plausible visions. This ambition introduces a host of concerns about the limits of prediction, the nature of plausibility, and how to establish plausibility. Second, it subjects these visions to democratic assessment by a range of stakeholders, thus raising methodological questions as to who are relevant stakeholders and how to activate different communities so as to engage the far future. This article makes the dilemmas posed by decisions about such methodological issues transparent and therefore articulates the role of plausibility in anticipatory governance.
Richard Feldman’s Uniqueness Thesis holds that “a body of evidence justifies at most one proposition out of a competing set of propositions”. The opposing position, permissivism, allows distinct rational agents to adopt differing attitudes towards a proposition given the same body of evidence. We assess various motivations that have been offered for Uniqueness, including: concerns about achieving consensus, a strong form of evidentialism, worries about epistemically arbitrary influences on belief, a focus on truth-conduciveness, and consequences for peer disagreement. We argue that each of these motivations either misunderstands the commitments of permissivism or is question-begging. Better understanding permissivism makes it a much more plausible position.
In this paper we argue that in recent literature on mechanistic explanations, authors tend to conflate two distinct features that mechanistic models can have or fail to have: plausibility and richness. By plausibility, we mean the probability that a model is correct in the assertions it makes regarding the parts and operations of the mechanism, i.e., that the model is correct as a description of the actual mechanism. By richness, we mean the amount of detail the model gives about the actual mechanism. First, we argue that there is at least a conceptual reason to keep these two features distinct, since they can vary independently from each other: models can be highly plausible while providing almost no details, while they can also be highly detailed but plainly wrong. Next, focusing on Craver's continuum of “how-possibly,” to “how-plausibly,” to “how-actually” models, we argue that the conflation of plausibility and richness is harmful to the discussion because it leads to the view that both are necessary for a model to have explanatory power, while in fact, richness is only so with respect to a mechanism's activities, not its entities. This point is illustrated with two examples of functional models.
In this chapter I defend a methodological view about how we should conduct substantive ethical inquiries in the fields of normative and practical ethics. I maintain that the direct plausibility and implausibility of general ethical principles – once fully clarified and understood – should be foundational in our substantive ethical reasoning. I argue that, in order to expose our ethical intuitions about particular cases to maximal critical scrutiny, we must determine whether they can be justified by directly plausible principles. To expose apparently plausible principles to maximal critical scrutiny, we must determine whether their direct plausibility can survive careful clarification of what they are really saying. This means that intuitions about cases are useful only in (a) suggesting principles that must stand on their own two feet, and (b) illustrating or otherwise helping us clarify what a principle is really saying. We should not reject principles that seem most directly plausible after we have fully clarified their content simply because they conflict with our intuitions about cases, because to do so is to side with uncritical prejudices over the teachings of critical scrutiny.
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony, “mesh” binding, and conjunctive binding. Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic Pointer Architecture approach to modeling cognition do scale appropriately. Specifically, we construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to successfully encode and decode the main lexical relations in WordNet, which has over 100,000 terms. In addition, we show that the same representations can be employed to construct recursively structured sentences consisting of arbitrary WordNet concepts, while preserving the original lexical structure. We argue that these results suggest that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition.
Cancer research is experiencing ‘paradigm instability’, since there are two rival theories of carcinogenesis which confront each other, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as much data as possible. We think instead that a deeper understanding of cancer can be achieved by means of more theoretical work, rather than by merely accumulating more data. To support our thesis, we introduce the analytic view of theory development, which rests on the concept of plausibility, and make clear in what sense plausibility and probability are distinct concepts. Then, the concept of plausibility is used to point out the ineliminable role played by the epistemic subject in the development of statistical tools and in the process of theory assessment. We then move to address a central issue in cancer research, namely the relevance of computational tools developed by bioinformaticists to detect driver mutations in the debate between the two main rival theories of carcinogenesis. Finally, we briefly extend our considerations on the role that plausibility plays in evidence amalgamation from cancer research to the more general issue of the divergences between frequentists and Bayesians in the philosophy of medicine and statistics. We argue that taking into account plausibility-based considerations can help clarify some epistemological shortcomings that afflict both these perspectives.
2014 Reprint of 1954 American Edition. Full facsimile of the original edition, not reproduced with Optical Recognition Software. This two volume classic comprises two titles: "Patterns of Plausible Inference" and "Induction and Analogy in Mathematics." This is a guide to the practical art of plausible reasoning, particularly in mathematics, but also in every field of human activity. Using mathematics as the example par excellence, Polya shows how even the most rigorous deductive discipline is heavily dependent on techniques of guessing, inductive reasoning, and reasoning by analogy. In solving a problem, the answer must be guessed at before a proof can be given, and guesses are usually made from a knowledge of facts, experience, and hunches. The truly creative mathematician must be a good guesser first and a good prover afterward; many important theorems have been guessed but not proved until much later. In the same way, solutions to problems can be guessed, and a good guesser is much more likely to find a correct solution. This work might have been called "How to Become a Good Guesser." -From the Dust Jacket.
For those who maintain that free will is incompatible with causal determinism, a persistent problem is to give a coherent characterization of action that is neither determined by prior events nor random, arbitrary, lucky or in some way insufficiently under the control of the agent to count as free action. One approach—that of Roderick Chisholm and others—is to say that a third alternative is for an action to be caused by an agent in a way that is not reducible to event causal terms. A different approach than the Chisholmian appeal to primitive substance causation is one that, instead, involves causal relations purely among events. This paper presents a particular event-causal indeterminist account of free action, describing both its attractions and recent objections to it, and then proposes a revised version, with the aim of supporting the plausibility of an event-causal indeterminist approach to free will.
In this paper it is shown how plausible reasoning of the kind illustrated in the ancient Greek example of the weak and strong man can be analyzed and evaluated using a procedure in which the pro evidence is weighed against the con evidence using formal, computational argumentation tools. It is shown by means of this famous example how plausible reasoning is based on an audience’s recognition of situations of a type they are familiar with as normal and comprehensible in their shared common knowledge. The paper extends previous work on this example by using three new multiagent argumentation schemes closely related to the scheme for argument from negative consequences.
Possibilities haunt history. The force of our explanations of events turns on the alternative possibilities these explanations suggest. It is these possible worlds which give us our understanding; and in human affairs we decide them by practical rather than theoretical judgement. In his widely acclaimed account of the role of counterfactuals in explanation, Geoffrey Hawthorn deploys extended examples from history and modern times to defend his argument. His conclusions cast doubt on existing assumptions about the nature and place of theory, and indeed of the possibility of knowledge itself, in the human sciences.
There seems to be something wrong with passing moralistic judgments on others’ moral character. Immanuel Kant’s ethics provides insight into an underexplored way in which moralistic judgments are problematic, namely, that they are both a sign of fundamentally poor character in the moralistic person herself and an obstacle to that person’s own moral self-improvement. Kant’s positions on these issues provide a basically compelling argument against moralistic judgment of others, an argument that can be detached from the most controversial elements of Kantian ethics to stand as plausible and instructive in its own right.
Cognitivism in psychology and philosophy is roughly the position that intelligent behavior can (only) be explained by appeal to internal cognitive processes, that is, rational thought in a very broad sense. Sections 1 to 5 attempt to explicate in detail the nature of the scientific enterprise that this intuition has inspired. That enterprise is distinctive in at least three ways: it relies on a style of explanation which is different from that of mathematical physics, in such a way that it is not basically concerned with quantitative equational laws; the states and processes with which it deals are intentional, in the sense that they are regarded as meaningful or representational; and it is not committed to reductionism, but is open to reduction in a form different from that encountered in other sciences. Spelling these points out makes it clear that the Cognitivist study of the mind can be rigorous and empirical, despite its unprecedented theoretical form. The philosophical explication has another advantage as well: it provides a much needed framework for articulating questions about whether the Cognitivist approach is right or wrong. The last three sections take advantage of that account, and address several such questions, pro and con.
Several alternatives vie today for recognition as the most plausible ontology, from physicalism to panpsychism. By and large, these ontologies entail that physical structures circumscribe consciousness by bearing phenomenal properties within their physical boundaries. The ontology of idealism, on the other hand, entails that all physical structures are circumscribed by consciousness in that they exist solely as phenomenality in the first place. Unlike the other alternatives, however, idealism is often considered implausible today, particularly by analytic philosophers. A reason for this is the strong intuition that an objective world transcending phenomenality is a self-evident fact. Other arguments—such as the dependency of phenomenal experience on brain function, the evidence for the existence of the universe before the origin of conscious life, etc.—are also often cited. In this essay, I will argue that these objections against the plausibility of idealism are false. As such, this essay seeks to show that idealism is an entirely plausible ontology.
Many weaknesses of game theory are cured by new models that embody simple cognitive principles, while maintaining the formalism and generality that makes game theory useful. Social preference models can generate team reasoning by combining reciprocation and correlated equilibrium. Models of limited iterated thinking explain data better than equilibrium models do; and they self-repair problems of implausibility and multiplicity of equilibria.
Quine’s thesis of underdetermination is significantly weaker than it has been taken to be in the recent literature, for the following reasons: (i) it does not hold for all theories, but only for some global theories, (ii) it does not require the existence of empirically equivalent yet logically incompatible theories, (iii) it does not rule out the possibility that all perceived rivalry between empirically equivalent theories might be merely apparent and eliminable through translation, (iv) it is not a fundamental thesis within Quine’s philosophy, and (v) it does not carry with it the anti-realistic consequences often associated with the thesis in recent debates. The paper analyzes Quine’s views on the matter and the changes they underwent over the years. A conjecture is put forth about why Quine’s thesis has been so widely misrepresented: Quine’s writings up to 1975 tackled primarily the formulation and justification of the thesis, but afterwards were concerned mostly with the question whether empirically equivalent rivals to the theory we hold are to be considered true also. When this latter discussion is read without bearing in mind Quine’s earlier formulation and justification of the thesis, his thesis seems to have stronger epistemic consequences than it actually does. A careful reading of his later writings shows, however, that the formulation of the thesis remained unchanged after 1975, and that his mature and considered views supported only a very mitigated version of the thesis.
Here the author of How to Solve It explains how to become a "good guesser." Marked by G. Polya's simple, energetic prose and use of clever examples from a wide range of human activities, this two-volume work explores techniques of guessing, inductive reasoning, and reasoning by analogy, and the role they play in the most rigorous of deductive disciplines.
In this article, a qualitative notion of subjective plausibility and its revision based on a preorder relation are implemented in higher-order logic. This notion of plausibility is used for modeling pragmatic aspects of communication on top of traditional two-dimensional semantic representations.
Halvorson argues that the semantic view of theories leads to absurdities. Glymour shows how to inoculate the semantic view against Halvorson's criticisms, namely by making it into a syntactic view of theories. I argue that this modified semantic-syntactic view cannot do the philosophical work that the original "language-free" semantic view was supposed to do.
Current practice in logic increasingly accords recognition to abductive, presumptive or plausible arguments, in addition to deductive and inductive arguments. But there is uncertainty about what these terms exactly mean, what the differences between them are (if any), and how they relate. By examining some analyses of these terms and some of the history of the subject (including the views of Peirce and Carneades), this paper sets out considerations leading to a set of definitions, discusses the relationship of these three forms of argument to argumentation schemes and sets out a new argumentation scheme for abductive argument.
If a catalogue were made of terms commonly used to affirm the adequacy of critical interpretations of works of art, one word certain to be included would be “plausible.” Yet this term is one which has received precious little attention in the literature of aesthetics. This is odd, inasmuch as I find the notion of plausibility central to an understanding of the nature of criticism. “Plausible” is a perplexing term because it can have radically different meanings depending on the circumstances of its employment. In the following discussion, I will make some observations about the logic of this concept in connection with its uses in two rather different contexts: the context of scientific inquiry on the one hand, and that of aesthetic interpretation on the other. In distinguishing separate senses of “plausible,” I shall provide reasons to resist the temptation to imagine that because logical aspects of two different types of inquiry, science and criticism, happen to be designated by the same term, they may to that extent be considered to have similar logical structures.
From antiquity several philosophers have claimed that the goal of natural science is truth. In particular, this is a basic tenet of contemporary scientific realism. However, all concepts of truth that have been put forward are inadequate to modern science because they do not provide a criterion of truth. This means that we will generally be unable to recognize a scientific truth when we reach it. As an alternative, this paper argues that the goal of natural science is plausibility and considers some characteristics of plausibility.
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition.
In their unifying theory to model uncertainty, Friedman and Halpern (1995–2003) applied plausibility measures to default reasoning satisfying certain sets of axioms. They proposed a distinctive condition for plausibility measures that characterizes “qualitative” reasoning (as contrasted with probabilistic reasoning). A similar and similarly fundamental, but more general and thus stronger condition was independently suggested in the context of “basic” entrenchment-based belief revision by Rott (1996–2003). The present paper analyzes the relation between the two approaches to formalizing basic notions of plausibility as used in qualitative default reasoning. While neither approach is a special case of the other, translations can be found that elucidate their relationship. I argue that Rott’s notion of plausibility allows for a more modular set-up and has a better philosophical motivation than that of Friedman and Halpern.
In his important recent book Schroeder proposes a Humean theory of reasons that he calls hypotheticalism. His rigorous account of the weight of reasons is crucial to his theory, both as an element of the theory and as constituting his defence to powerful standard objections to Humean theories of reasons. In this paper I examine that rigorous account and show it to face problems of vacuity and consonance. There are technical resources that may be brought to bear on the problem of vacuity, but implementation is not simple and philosophical motivation a further difficulty. Even supposing vacuity is fixed, the problems of consonance bring to light a different obstruction lying in Schroeder’s path. There is a difference between the general weighing of reasons and the context specificity of the correct placing of weight on them in deliberation, and this difference cannot be fixed by the resources in the account. For these reasons we are still waiting for a plausible Humean theory of reasons.
A fundamental claim associated with parallel distributed processing theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts, that is, coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in brain and often dismiss localist theories as biologically implausible. The author reviews a range of data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrasts local and alternative distributed coding schemes and argues that the common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.
Fred Feldman's fascinating new book sets out to defend hedonism as a theory about the Good Life. He tries to show that, when carefully and charitably interpreted, certain forms of hedonism yield plausible evaluations of human lives. Feldman begins by explaining the question about the Good Life. As he understands it, the question is not about the morally good life or about the beneficial life. Rather, the question concerns the general features of the life that is good in itself for the one who lives it. Hedonism says (roughly) that the Good Life is the pleasant life. After showing that received formulations of hedonism are often confused or incoherent, Feldman presents a simple, clear, coherent form of sensory hedonism that provides a starting point for discussion. He then presents a catalogue of classic objections to hedonism, coming from sources as diverse as Plato, Aristotle, Brentano, Ross, Moore, Rawls, Kagan, Nozick, Brandt, and others. One of Feldman's central themes is that there is an important distinction between the forms of hedonism that emphasize sensory pleasure and those that emphasize attitudinal pleasure. Feldman formulates several kinds of hedonism based on the idea that attitudinal pleasure is the Good. He claims that attitudinal forms of hedonism, which have often been ignored in the literature, are worthy of more careful attention. Another main theme of the book is the plasticity of hedonism. Hedonism comes in many forms. Attitudinal hedonism is especially receptive to variations and modifications. Feldman illustrates this plasticity by formulating several variants of attitudinal hedonism and showing how they evade some of the objections. He also shows how it is possible to develop forms of hedonism that are equivalent to the allegedly anti-hedonistic theory of G. E. Moore and the Aristotelian theory according to which the Good Life is the life of virtue, or flourishing. He also formulates hedonisms relevantly like the ones defended by Aristippus and Mill. Feldman argues that a carefully developed form of attitudinal hedonism is not refuted by objections concerning 'the shape of a life'. He also defends the claim that all of the alleged forms of hedonism discussed in the book genuinely deserve to be called 'hedonism'. Finally, after dealing with the last of the objections, he gives a sketch of his hedonistic vision of the Good Life.
Just as Hume awakened Kant from his dogmatic slumber and Rousseau revealed the moral world to him, Diderot may have opened to him the universe of politics. The late Kant could have been deeply influenced, without knowing it, by the Diderot who turned the Historia de las dos Indias into the "Bible of revolutions", as well as by the Diderot of the Encyclopédie. The thesis defended is that the entire Enlightenment project was radical in character, and that the distinction between a moderate and a radical Enlightenment should be qualified, since the points they have in common seem to outweigh their discrepancies. Spinoza and Heine may prove useful for emphasizing this view.
This paper addresses the two extensional objections to the Humean Theory of Reasons—that it allows for too many reasons, and that it allows for too few. Although I won't argue so here, many of the other objections to the Humean Theory of Reasons turn on assuming that it cannot successfully deal with these two objections. What I will argue is that the force of the too many and the too few objections to the Humean Theory depends on whether we assume that Humeans are committed to a thesis about the weight of reasons—one I call Proportionalism. In particular, I'll show how a version of the Humean Theory that rejects Proportionalism can reasonably hope to escape both the too many and the too few objections. This will constitute my defense of this version of the Humean Theory. But then, separately, I will argue that this defense of the Humean Theory is not ad hoc. I'll argue that Humeans have no reason to accept Proportionalism in the first place. Or at least, no weighty one. There are three parts to the paper. In Part 1 we introduce the Humean Theory and the too few reasons objection. I'll first lay out the objection, and then lay out the basis for a response on behalf of my favored version of the Humean Theory. There will be an obvious objection to my defense, but it will turn out to depend on the assumption of Proportionalism. This will constitute my argument that the susceptibility of the Humean Theory to...
The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting ‘uniform’ and ‘flat’ models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.
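The neighborhood picture just described can be sketched in miniature. The following toy Python model is illustrative only, not the paper's formal system: the worlds, evidence sets, and function names are hypothetical. It implements a basic evidence modality ("some evidence set entails P") and one natural reading of belief under possibly contradictory evidence, namely that P is believed when it holds throughout the intersection of every maximal consistent family of evidence sets.

```python
from itertools import combinations

# Hypothetical example: four worlds and three evidence sets (neighborhoods)
# that are pairwise compatible in places but jointly inconsistent.
worlds = frozenset({1, 2, 3, 4})
evidence = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]

def has_evidence_for(prop):
    # Basic evidence modality: some evidence set is contained in prop.
    return any(E <= prop for E in evidence)

def maximal_consistent_families():
    # Families of evidence sets with nonempty intersection, maximal under
    # inclusion; scanning from largest to smallest guarantees maximality.
    fams = []
    for r in range(len(evidence), 0, -1):
        for fam in combinations(evidence, r):
            if frozenset.intersection(*fam) and not any(set(fam) <= set(f) for f in fams):
                fams.append(fam)
    return fams

def believes(prop):
    # Belief: prop holds throughout the intersection of every
    # maximal consistent family of evidence sets.
    return all(frozenset.intersection(*fam) <= prop
               for fam in maximal_consistent_families())
```

On this toy model the three evidence sets cannot all be true together, so the maximal consistent families are {1,2},{2,3} and {2,3},{3,4}, with intersections {2} and {3}; the agent therefore believes exactly those propositions containing both worlds 2 and 3, even while holding evidence for narrower claims.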
Although, when first introduced, Copernicus's theory considered as a whole was not superior to the Ptolemaic theory according to any of the usual criteria for comparing theories and determining their acceptability, it did have features which provided the early Copernicans with good reasons for entertaining it and trying to develop it further. These features are discussed and then three plausibility considerations which seem to be operative in this case are formulated.
According to the instrumentalism of Friedman and Machlup it is irrelevant whether the explanatory principles or “assumptions” of a theory satisfy any criterion of “plausibility,” “realism,” “credibility,” or “soundness.” In this view the main or only criterion for selecting theories is whether a theory yields empirically testable implications that turn out to be consistent with observations. All we should require or expect from a theory is that it is a useful instrument for the purpose of prediction. Considerations of the “efficiency” of a theory for the purpose of ordering our experiences are permitted, but considerations of “plausibility” are not. “Explanatory assumptions” are not really explanatory in the sense that they claim to represent underlying causal processes in reality; they only serve to generate, by deduction, implications that are in accordance with as many observations as possible.
This article evaluates whether Rescher's rules for plausible reasoning or other rules used in artificial intelligence for "confidence factors" can be extended to deal with arguments where the linked-convergent distinction is important.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen 1983, 27).
This paper defends some aspects of the intentionalist and internationalist worldviews of mainstream development studies against certain moral claims emanating from the New Right and a diverse post-Left. I contend that citizens and states in the advanced industrial world have a responsibility to attend to the claims of distant strangers. Although it is difficult to specify in determinate ways how this responsibility should be discharged—save for attending to basic human needs and rights—the responsibility itself derives from the interlinking and asymmetrical exchanges that bind distant strangers together in an interdependent world economy. I draw on Rawls and Roemer to specify the nature of this responsibility. I also draw on Benhabib to make a modified Rawlsian theory of justice less abstract while continuing to insist on the possibility and necessity of conversations between radically different social actors. The final part of the paper attends to questions of plausibility. I suggest that New Right and post-Left critiques of an expanded mainstream in development studies and policy are ethically deficient to the extent that they commend alternative development strategies without giving proper consideration to their costs and disbenefits. Development ethics, I conclude, is not just about questions of transnational justice and positionality; it is also about the construction of plausible alternative worlds and practical development policies.
Aristotle's conception and use of ta endoxa are key points to our understanding of Aristotelian dialectic. But, nowadays, they are not of historical or hermeneutic importance alone, as, in Aristotle's treatment of endoxa, we still see a relevant contribution to the modern study of argumentation. I propose here an interpretation of endoxa to that effect: namely, as plausible propositions. This version is not only defensible in the Aristotelian context, it may also shed new light on some of his assumptions and methodological shortcomings (e.g. concerning the 'plausible/implausible' pair); finally, it will even enable us to show certain basic hints and guidelines, advanced by Aristotle's study of endoxa, which still serve nowadays to orientate our studies of argumentation from the angle of a theory of plausible argument currently under construction. These hints and guidelines suggest a pragmatic, gradual and comparative discursive concept of plausibility, and point, in particular, towards the reasonable dealing with, and weighing up of, differences of opinion within this frame of reference.