It is plausible that there are epistemic reasons bearing on a distinctively epistemic standard of correctness for belief. It is also plausible that there are a range of practical reasons bearing on what to believe. These theses are often thought to be in tension with each other. Most significantly for our purposes, it is obscure how epistemic reasons and practical reasons might interact in the explanation of what one ought to believe. We draw an analogy with a similar distinction between types of reasons for actions in the context of activities. The analogy motivates a two-level account of the structure of normativity that explains the interaction of correctness-based and other reasons. This account relies upon a distinction between normative reasons and authoritatively normative reasons. Only the latter play the reasons role in explaining what state one ought to be in. All and only practical reasons are authoritative reasons. Hence, in one important sense, all reasons for belief are practical reasons. But this account also preserves the autonomy and importance of epistemic reasons. Given the importance of having true beliefs about the world, our epistemic standard typically plays a key role in explaining what we ought to believe. In addition to reconciling (versions of) evidentialism and pragmatism, this two-level account has implications for a range of important debates in normative theory, including the interaction of right and wrong reasons for actions and other attitudes, the significance of reasons in understanding normativity and authoritative normativity, the distinction between ‘formal’ and ‘substantive’ normativity, and whether there is a unified source of authoritative normativity.
Mark Schroeder has argued that all reasonable forms of inconsistency of attitude consist of having the same attitude type towards a pair of inconsistent contents (A-type inconsistency). We suggest that he is mistaken in this, offering a number of intuitive examples of pairs of distinct attitude types with consistent contents which are intuitively inconsistent (B-type inconsistency). We further argue that, despite the virtues of Schroeder's elegant A-type expressivist semantics, B-type inconsistency is in many ways the more natural choice in developing an expressivist account of moral discourse. We close by showing how to adapt ordinary formality-based accounts of logicality to define a B-type account of logical inconsistency and distinguish it from both semantic and pragmatic inconsistency. In sum, we provide a roadmap of how to develop a successful B-type expressivism.
I argue that certain species of belief, such as mathematical, logical, and normative beliefs, are insulated from a form of Harman-style debunking argument whereas moral beliefs, the primary target of such arguments, are not. Harman-style arguments have been misunderstood as attempts to directly undermine our moral beliefs. They are rather best given as burden-shifting arguments, concluding that we need additional reasons to maintain our moral beliefs. If we understand them this way, then we can see why moral beliefs are vulnerable to such arguments while mathematical, logical, and normative beliefs are not—the very construction of Harman-style skeptical arguments requires the truth of significant fragments of our mathematical, logical, and normative beliefs, but requires no such thing of our moral beliefs. Given this property, Harman-style skeptical arguments against logical, mathematical, and normative beliefs are self-effacing; doubting these beliefs on the basis of such arguments results in the loss of our reasons for doubt. But we can cleanly doubt the truth of morality.
Etiquette and other merely formal normative standards like legality, honor, and rules of games are taken less seriously than they should be. While these standards aren’t intrinsically reason providing (or “substantive”) in the way morality is often taken to be, they also play an important role in our practical lives: we collectively treat them as important for assessing the behavior of ourselves and others and as licensing particular forms of sanction for violations. I here develop a novel account of the normativity of formal standards on which the role they play in our practical lives explains a distinctive kind of reason to obey them. We have this kind of reason to be polite because etiquette is important to us. We also have this kind of reason to be moral because morality is important to us. This parallel suggests that the importance we assign to morality is insufficient to justify its being substantive.
Expressivists explain the expression relation which obtains between sincere moral assertion and the conative or affective attitude thereby expressed by appeal to the relation which obtains between sincere assertion and belief. In fact, they often explicitly take the relation between moral assertion and their favored conative or affective attitude to be exactly the same as the relation between assertion and the belief thereby expressed. If this is correct, then we can use the identity of the expression relation in the two cases to test the expressivist account as a descriptive or hermeneutic account of moral discourse. I formulate one such test, drawing on a standard explanation of Moore's paradox. I show that if expressivism is correct as a descriptive account of moral discourse, then we should expect versions of Moore's paradox where we explicitly deny that we possess certain affective or conative attitudes. I then argue that the constructions that mirror Moore's paradox are not incoherent. It follows that expressivism is either incorrect as a hermeneutic account of moral discourse or that the expression relation which holds between sincere moral assertion and affective or conative attitudes is not identical to the relation which holds between sincere non-moral assertion and belief. A number of objections are canvassed and rejected.
This is an opinionated overview of the Frege-Geach problem, in both its historical and contemporary guises. It covers higher-order attitude approaches, tree-tying, Gibbard-style solutions, and Schroeder's recent A-type expressivist solution.
I investigate syntactic notions of theoretical equivalence between logical theories and a recent objection thereto. I show that this recent criticism of syntactic accounts as extensionally inadequate is unwarranted, by developing an account which is plausibly extensionally adequate and more philosophically motivated. This matters for recent anti-exceptionalist treatments of logic, since syntactic accounts require less theoretical baggage than semantic accounts.
Philosophical arguments usually are and nearly always should be abductive. Across many areas, philosophers are starting to recognize that often the best we can do in theorizing about some phenomenon is to put forward our best overall account of it, warts and all. This is especially true in esoteric areas like logic, aesthetics, mathematics, and morality, where the data to be explained are often based in our stubborn intuitions.

While this methodological shift is welcome, it's not without problems. Abductive arguments involve significant theoretical resources which can themselves be part of what's being disputed. This means that we will sometimes find otherwise good arguments which suggest their own grounds are problematic. In particular, sometimes revising our beliefs on the basis of such an argument can undermine the very justification we used in that argument.

This feature, which I'll call self-effacingness, occurs most dramatically in arguments against our standing views on the esoteric subject matters mentioned above: logic, mathematics, aesthetics, and morality. This is because these subject matters all play a role in how we reason abductively. This isn't an idle fact; we can resist some challenges to our standing beliefs about these subject matters exactly because the challenges are self-effacing. The self-effacing character of certain arguments is thus both a benefit and a limitation of the abductive turn and deserves serious attention. I aim to give it the attention it deserves.
I defend normative subjectivism against the charge that believing in it undermines the functional role of normative judgment. In particular, I defend it against the claim that believing that our reasons change from context to context is problematic for our use of normative judgments. To do so, I distinguish two senses of normative universality and normative reasons: evaluative universality and reasons, and ontic universality and reasons. The former captures how even subjectivists can evaluate the actions of those subscribing to other conventions; the latter explicates how their reasons differ from ours. I then show that four aspects of the functional role of normativity---evaluation of our own and others' actions and reasons, normative communication, hypothetical planning, and evaluating counternormative conditionals---at most require that our normative systems be evaluatively universal. Yet reasonable subjectivist positions need not deny evaluative universality.
I distinguish two ways of developing anti-exceptionalist approaches to logical revision. The first emphasizes comparing the theoretical virtuousness of developed bodies of logical theories, such as classical and intuitionistic logic. I'll call this whole theory comparison. The second attempts local repairs to problematic bits of our logical theories, such as dropping excluded middle to deal with intuitions about vagueness. I'll call this the piecemeal approach. I then briefly discuss a problem I've developed elsewhere for comparisons of logical theories. Essentially, the problem is that a pair of logics may each evaluate the alternative as superior to themselves, resulting in oscillation between logical options. The piecemeal approach offers a way out of this problem and thereby might seem preferable to whole theory comparisons. I go on to show that reflective equilibrium, the best known piecemeal method, has deep problems of its own when applied to logic.
Why do promises give rise to reasons? I consider a quadruple of possibilities which I think will not work, then sketch the explanation of the normativity of promising I find more plausible—that it is constitutive of the practice of promising that promise-breaking implies liability for blame and that we take liability for blame to be a bad thing. This effects a reduction of the normativity of promising to conventionalism about liability together with instrumental normativity and desire-based reasons. This is important for a number of reasons, but the most important reason is that this style of account can be extended to account for nearly all normativity—one notable exception being instrumental normativity itself. Success in the case of promises suggests a general reduction of normativity to conventions and instrumental normativity. But success in the case of promises is already quite interesting and does not depend essentially on the general claim about normativity.
Enthymemes are traditionally defined as arguments in which some elements are left unstated. It is an empirical fact that enthymemes are both enormously frequent and appropriately understood in everyday argumentation. Why is it so? We outline an answer that dispenses with the so-called "principle of charity", which is the standard notion underlying most works on enthymemes. In contrast, we suggest that a different force drives enthymematic argumentation—namely, parsimony, i.e. the tendency to optimize resource consumption in light of the agent's goals. On this view, the frequent use of enthymemes does not indicate sub-optimal performance of arguers, requiring appeals to charity for their redemption. On the contrary, it is seen as a highly adaptive argumentation strategy, given the need of everyday reasoners to optimize their cognitive resources. Considerations of parsimony also affect enthymeme reconstruction, i.e. the process by which the interpreter makes sense of the speaker's enthymemes. Far from being driven by any pro-social cooperative instinct, interpretative efforts are aimed at extracting valuable information at reasonable costs from available sources. Thus, there is a tension between parsimony and charity, insofar as the former is a non-social constraint for self-regulation of one's behaviour, whereas the latter implies a pro-social attitude. We will argue that some versions of charity are untenable for enthymeme interpretation, while others are compatible with the view defended here, but still require parsimony to expose the ultimate reasons upon which a presumption of fair treatment in enthymeme reconstruction is founded.
A natural suggestion and increasingly popular account of how to revise our logical beliefs treats revision of logic analogously to the revision of scientific theories. I investigate this approach and argue that simple applications of abductive methodology to logic result in revision-cycles, developing a detailed case study of an actual dispute with this property. This is problematic if we take abductive methodology to provide justification for revising our logical framework. I then generalize the case study, pointing to similarities with more recent and popular heterodox logics such as naïve logics of truth. I use this discussion to motivate a constraint—logical partisanhood—on the uses of such methodology. Roughly: both the proposed alternative and our actual background logic must be able to agree that moving to the alternative logic is no worse than staying put.
Sometimes a fact can play a role in a grounding explanation, but the particular content of that fact makes no difference to the explanation—any fact would do in its place. I call these facts vacuous grounds. I show that applying the distinction between vacuous and non-vacuous grounds allows us to give a principled solution to Kit Fine and Stephen Kramer’s paradox of ground. This paradox shows that on minimal assumptions about grounding and minimal assumptions about logic, grounding can be shown to be reflexive, contra the intuitive character of grounds. I argue that we should never have accepted that grounding is irreflexive in the first place; the intuitions that support irreflexivity plausibly only require that grounding be non-vacuously irreflexive. Fine and Kramer’s paradox relies, essentially, on a case of vacuous grounding and is thus no problem for this account.
I respond to an interesting objection to my 2014 argument against hermeneutic expressivism. I argue that even though Toppinen has identified an intriguing route for the expressivist to tread, the plausible developments of it would not fall to my argument anyway---as they do not make direct use of the parity thesis, which claims that expression works the same way for conative and cognitive attitudes. I close by sketching a few other problems plaguing such views.
I argue that we can and should extend Tarski's model-theoretic criterion of logicality to cover indefinite expressions like Hilbert's ɛ operator, Russell's indefinite description operator η, and abstraction operators like 'the number of'. I draw on this extension to discuss the logical status of both abstraction operators and abstraction principles.
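As background for the Tarskian criterion this abstract extends, the core permutation-invariance test can be sketched computationally for the simplest case: unary "quantifiers" construed as functions from predicate extensions (subsets of a finite domain) to truth values. This is an illustrative toy of my own, not the paper's machinery; note that the indefinite operators discussed above (ε, η) return witnesses rather than truth values and so require a more delicate treatment.

```python
# A toy version of Tarski's permutation-invariance test for logicality.
# A candidate operator counts as logical only if its verdict on every
# predicate extension is unchanged under every permutation of the domain.
from itertools import chain, combinations, permutations

def powerset(domain):
    s = sorted(domain)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def permutation_invariant(domain, quantifier):
    for pi in permutations(sorted(domain)):
        mapping = dict(zip(sorted(domain), pi))
        for ext in powerset(domain):
            image = frozenset(mapping[x] for x in ext)
            if quantifier(ext) != quantifier(image):
                return False
    return True

dom = {0, 1, 2}
exists = lambda ext: len(ext) > 0   # "something is F" -- invariant
about_zero = lambda ext: 0 in ext   # "0 is F" -- not invariant
print(permutation_invariant(dom, exists))      # True
print(permutation_invariant(dom, about_zero))  # False
```

The existential quantifier passes because permutations preserve cardinality; the domain-element-specific condition fails, which is the intuitive verdict that it is non-logical.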
One of our purposes here is to expose something of the elementary logical structure of abductive reasoning, and to do so in a way that helps orient theorists to the various tasks that a logic of abduction should concern itself with. We are mindful of criticisms that have been levelled against the very idea of a logic of abduction; so we think it prudent to proceed with a certain diffidence. That our own account of abduction is itself abductive is a methodological expression of this diffidence. A second objective is to test our conception of abduction's logical structure against some of the more promising going accounts of abductive reasoning. We offer our various suggestions in a benignly advisory way. The primary target of our advice is ourselves: the suggestions are meant as guides to work we have yet to complete or, in some instances, start. It is possible that our colleagues in the abduction research communities will find our counsel to be of some interest. But we repeat that our first concern is to try to get ourselves straight about what a logic of abduction should encompass.
In all three of its manifestations—abusive, circumstantial, and tu quoque—the role of the ad hominem is to raise a doubt about the opposite party’s case-making bona fides. Provided that it is both presumptive and provisional, drawing such a conclusion is not a logical mistake, hence not a fallacy on the traditional conception of it. More remarkable is the role of the ad hominem retort in seeking the reassurance of one’s opponent when, on the face of it, reassurance is precisely what he would seem to be ill-placed to give. Brief concluding remarks are given over to an examination of rival approaches to the ad hominem, especially those in which it is conceived of as a dialectical error.
In a world plagued by disagreement and conflict one might expect that the exact sciences of logic and mathematics would provide a safe harbor. In fact these disciplines are rife with internal divisions between different, often incompatible, systems. Do these disagreements admit of resolution? Can such resolution be achieved without disturbing the assumption that the theorems of logic and mathematics state objective truths about the real world? In this original and historically rich book John Woods explores apparently intractable disagreements in logic and the foundations of mathematics and sets out conflict-resolution strategies that evade or disarm these stalemates. An important sub-theme of the book is the extent to which pluralism in logic and the philosophy of mathematics undermines realist assumptions. This book makes an important contribution to such areas of philosophy as logic, philosophy of language and argumentation theory. It will also be of interest to mathematicians and computer scientists.
It is regrettably common for theorists to attempt to characterize the Humean dictum that one can’t get an ‘ought’ from an ‘is’ just in broadly logical terms. We here address an important new class of such approaches which appeal to model-theoretic machinery. Our complaint about these recent attempts is that they interfere with substantive debates about the nature of the ethical. This problem, developed in detail for Daniel Singer’s and Gillian Russell and Greg Restall’s accounts of Hume’s dictum, is of a general type arising for the use of model-theoretic structures in cashing out substantive philosophical claims: the question of whether an abstract model-theoretic structure successfully interprets something often involves taking a stand on non-trivial issues surrounding the thing. In the particular case of Hume’s dictum, given reasonable conceptual or metaphysical claims about the ethical, Singer’s and Russell and Restall’s accounts treat obviously ethical claims as descriptive and vice versa. Consequently, their model-theoretic characterizations of Hume’s dictum are not metaethically neutral. This encourages skepticism about whether model-theoretic machinery suffices to provide an illuminating distinction between the ethical and the descriptive.
Moore’s paradox—the infamous felt bizarreness of sincerely uttering something of the form “I believe grass is green, but it ain’t”—has attracted a lot of attention since its original discovery (Moore 1942). It is often taken to be a paradox of belief—in the sense that the locus of the inconsistency is the beliefs of someone who so sincerely utters. This claim has been labeled the priority thesis: if you have an explanation of why a putative content could not be coherently believed, you thereby have an explanation of why it cannot be coherently asserted (Shoemaker 1995). The priority thesis, however, is insufficient to give a general explanation of Moore-paradoxical phenomena and, moreover, it’s false. I demonstrate this, then show how to give a commitment-theoretic account of Moore-paradoxicality, drawing on work by Bach and Harnish. The resulting account has the virtue of explaining not only cases of pragmatic incoherence involving assertions, but also cases of cognate incoherence arising for other speech acts, such as promising, guaranteeing, ordering, and the like.
Traditionally, an enthymeme is an incomplete argument, made so by the absence of one or more of its constituent statements. An enthymeme resolution strategy is a set of procedures for finding those missing elements, thus reconstructing the enthymeme and restoring its meaning. It is widely held that a condition on the adequacy of such procedures is that statements restored to an enthymeme produce an argument that is good in some given respect in relation to which the enthymeme itself is bad. In previous work, we emphasized the role of parsimony in enthymeme resolution strategies and concomitantly downplayed the role of charity. In the present paper, we take the analysis of enthymemes a step further. We will propose that if the pragmatic features that attend the phenomenon of enthymematic communication are duly heeded, the very idea of reconstructing enthymemes loses much of its rationale, and their interpretation comes to be conceived in a new light.
Formal nonmonotonic systems try to model the phenomenon that common sense reasoners are able to “jump” in their reasoning from assumptions Δ to conclusions C without there being any deductive chain from Δ to C. Such jumps are done by various mechanisms which are strongly dependent on context and knowledge of how the actual world functions. Our aim is to motivate these jump rules as inference rules designed to optimise survival in an environment with scant resources of effort and time. We begin with a general discussion and quickly move to Section 3, where we introduce five resource principles. We show that these principles lead to some well known nonmonotonic systems such as Nute’s defeasible logic. We also give several examples of practical reasoning situations to illustrate our principles.
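The "jump" behaviour described here can be illustrated with a deliberately crude sketch, loosely in the spirit of Nute-style defeasible logic. It is not a faithful implementation of that system or of the paper's five resource principles; the function and rule names are illustrative only.

```python
# A toy sketch of defeasible "jump" inference: defeasible rules fire
# presumably, unless a defeater whose premises also hold blocks that
# very conclusion. Single-pass, no rule chaining -- illustrative only.

def defeasible_conclusions(facts, defeasible_rules, defeaters):
    """facts: set of atoms; defeasible_rules: list of (premises, conclusion)
    pairs; defeaters: list of (premises, blocked_conclusion) pairs."""
    conclusions = set(facts)
    for premises, conclusion in defeasible_rules:
        if premises <= conclusions:
            # Jump to the conclusion unless an applicable defeater blocks it.
            blocked = any(
                dprem <= conclusions and dconc == conclusion
                for dprem, dconc in defeaters
            )
            if not blocked:
                conclusions.add(conclusion)
    return conclusions

# Birds presumably fly; being a penguin defeats the jump to flight.
rules = [({"bird"}, "flies")]
defeaters = [({"penguin"}, "flies")]

print(defeasible_conclusions({"bird"}, rules, defeaters))
print(defeasible_conclusions({"bird", "penguin"}, rules, defeaters))
```

The first call jumps to "flies" with no deductive chain licensing it; the second withholds the jump once the defeater applies, which is the nonmonotonic signature: adding information retracts a conclusion.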
Consider the proposition, "Informal logic is a subdiscipline of philosophy". The best chance of showing this to be true is showing that informal logic is part of logic, which in turn is a part of philosophy. Part 1 is given over to the task of sorting out these connections. If successful, informal logic can indeed be seen as part of philosophy; but there is no question of an exclusive relationship. Part 2 is a critical appraisal of the suggestion that informal logic is applied epistemology. Part 3 examines the claim that informal logic has failed to penetrate into mainstream philosophy, and suggestions for amelioration are considered.
I show that the model-theoretic meaning that can be read off the natural deduction rules for disjunction fails to have certain desirable properties. I use this result to argue against a modest form of inferentialism which uses natural deduction rules to fix model-theoretic truth-conditions for logical connectives.
An agent-centered, goal-directed, resource-bound logic of human reasoning would do well to note that individual cognitive agency is typified by the comparative scantness of available cognitive resources and by abduction's ignorance-preserving character. My principal purpose here is to tie abduction's scarce-resource adjustment capacity to its ignorance preservation.
In the human cognitive economy there are four grades of epistemic involvement. Knowledge partitions into distinct sorts, each in turn subject to gradations. This gives a fourwise partition on ignorance, which exhibits somewhat different coinstantiation possibilities. The elements of these partitions interact with one another in complex and sometimes cognitively fruitful ways. The first grade of knowledge I call “anselmian” to echo the famous declaration credo ut intelligam, that is, “I believe in order that I may come to know”. As construed here, one knows in this anselmian way that E = mc2 just in case one knows that that sentence expresses a true statement, but without having to understand the proposition it expresses. Most epistemologists ignore the significance of this grade of epistemic involvement. In a second grade of epistemic involvement, knowing that E = mc2 is knowing what that sentence means and understanding the proposition it expresses. This is knowledge in the propositional or semantic sense, and is the dominant target of epistemological investigation. Tacit and implicit knowledge occupies another tier. A typical example would be something that someone has “known all along” but, until now, hasn’t had occasion to put her mind to or to formulate in words. TI-knowledge remains a minority interest in today’s epistemology. Operating at a fourth grade of epistemic involvement is what I call “impact”-knowledge, which is the knowledge of a matter at its deepest and most widespread. An example, to be discussed below, is the knowledge that was generated by the Wiles proof of Fermat’s last theorem. Its true importance lies not only, or even mainly, in its verification of a commonly accepted fact about numbers, but rather in its enrichment of the mathematics of elliptical curves and the promise it holds for greater advancement into the mathematical unknown. Knowledge of this fourth grade has yet to find a seat in the parliaments of epistemology.
Knowledge of the anselmian sort is independent of the other three. Tacit and implicit knowledge is incompatible with anselmian and semantic knowledge but coinstantiable with impact-knowledge. Semantic knowledge is incompatible with tacit and implicit knowledge but coinstantiable with the others. Impact-knowledge is pairwise coinstantiable with the others. Below I will bring the ignorance partitions into such alignment as they have with these. In doing so, I’ll propose a naturalized causal response epistemology designed to give these interactive distinctions the theoretical air they need to breathe.
Much of cognitive science seeks to provide principled descriptions of various kinds and aspects of rational behaviour, especially in beings like us or AI simulacra of beings like us. For the most part, these investigators presuppose an unarticulated common sense appreciation of the rationality that such behaviour consists in. On those occasions when they undertake to bring the relevant norms to the surface and to give an account of that to which they owe their legitimacy, these investigators tend to favour one or other of three approaches to the normativity question: the analyticity or truth-in-a-model approach; the pluralism approach; and the reflective equilibrium approach. All three of these approaches to the normativity question are seriously flawed, never mind that the first two have some substantial provenance among logicians and the third has enjoyed a flourishing philosophical career. Against these views, we propose a strong version of what might be called normatively immanent descriptivism. We attempt to elucidate its virtues and to deal with what appears to be its most central vulnerability, embodied in the plain fact that actual human behaviour is sometimes irrational.
Abduction is or subsumes a process of inference. It entertains possible hypotheses and it chooses hypotheses for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. Differently from previous work in the area, our aim is to promote the integration of reasoning and learning in a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability to the systems, facilitating the interaction with the outside world.
Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
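For readers unfamiliar with the symbolic side being bridged in the paper above, abduction over propositional Horn clauses can be sketched as a search for assumption sets that, together with the rules, entail an observation. This toy ignores the neural machinery entirely; the function names and the rain/sprinkler example are my own illustration.

```python
# A sketch of symbolic abduction: an explanation for an observation is a
# set of "abducible" atoms which, together with Horn rules, entails it.
from itertools import combinations

def entails(rules, assumptions, goal):
    """Forward-chain Horn rules, given as (body_set, head) pairs."""
    known = set(assumptions)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return goal in known

def abduce(rules, abducibles, observation):
    """Return the minimal-size subsets of abducibles explaining the observation."""
    for size in range(len(abducibles) + 1):
        hits = [set(c) for c in combinations(sorted(abducibles), size)
                if entails(rules, c, observation)]
        if hits:
            return hits
    return []

# wet_grass follows from rain, or alternatively from sprinkler.
rules = [({"rain"}, "wet_grass"), ({"sprinkler"}, "wet_grass")]
print(abduce(rules, {"rain", "sprinkler"}, "wet_grass"))
```

Each returned set is one alternative abductive explanation; enumerating them in parallel is exactly the kind of computation the paper delegates to neural networks.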
The purpose of this paper is to communicate some developments in what we call the new logic. In a nutshell, the new logic is a model of the behaviour of a logical agent. By these lights, logical theory has two principal tasks. The first is an account of what a logical agent is. The second is a description of how this behaviour is to be modelled. Before getting on with these tasks we offer a disclaimer and a warning. The disclaimer is that although the new logic is significantly different from it, we have no inclination to see the new logic as a rival of mathematical logic. We do not advocate the displacement of, e.g. model theory, but rather its supplementation or adaptation. The warning is that, whereas mathematical logic must eschew psychologism, the new logic cannot do without it. The fuller story of the new logic is part of our book, The Reach of Abduction, scheduled to appear in 2001 or early 2002.
It is widely accepted by formal and informal logicians alike that a formal logic which, by the lights of English, gets the connectives wrong nevertheless conspires to get entailment right—right, that is, modulo English. There is a vexing problem occasioned by this semantic alienation of formal logic: it is next to impossible for formal logic to meet the expectations of realism. What, then, of informal logic?
This volume serves as a detailed introduction for those new to the field as well as a rich source of new insights and potential research agendas for those already engaged with the philosophy of economics.
The traditional Dung networks depict arguments as atomic and study the relationships of attack between them. This can be generalised in two ways. One is to consider various forms of attack, support, feedback, etc. Another is to add content to nodes and put there not just atomic arguments but more structure, e.g. proofs in some logic or simply just formulas from a richer language. This paper offers to use temporal and modal language formulas to represent arguments in the nodes of a network. The suitable semantics for such networks is Kripke semantics. We also introduce a new key concept of usability of an argument. This is the beginning of a continuing research programme for adding contents to the nodes of an argumentation network. This research will allow us to address notions like 'what does it exactly mean for a node to attack another?', 'what does it mean for a network to be consistent?', or 'can we give proper proof rules to manipulate networks?', and more.
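As a point of reference for the atomic setting this paper generalises, the grounded extension of a plain Dung framework can be computed as the least fixed point of the characteristic function. A minimal sketch with hypothetical names; it implements none of the modal or temporal machinery proposed above.

```python
# Dung's atomic setting: arguments are bare nodes, "attacks" is a binary
# relation, and the grounded extension is the least fixed point of the
# characteristic function (collect arguments defended by the current set).

def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, current):
        # Acceptable w.r.t. `current`: every attacker is itself attacked
        # by some member of `current`.
        return all(
            any((d, atk) in attacks for d in current)
            for atk in attackers_of[candidate]
        )

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# a attacks b, b attacks c: a is unattacked, b is defeated, c reinstated.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

The fixed-point iteration starts from the unattacked arguments and adds whatever they defend, mirroring the "reinstatement" intuition that the richer networks in the paper must also respect.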
I argue that in order to apply the most common type of criteria for logicality, invariance criteria, to natural language, we need to consider both invariance of content—modeled by functions from contexts into extensions—and invariance of character—modeled, à la Kaplan, by functions from contexts of use into contents. Logical expressions should be invariant in both senses. If we do not require this, then old objections due to Timothy McCarthy and William Hanson, suitably modified, demonstrate that content-invariant expressions can display intuitive marks of non-logicality. If we do require this, we neatly avoid these objections while also managing to demonstrate desirable connections of logicality to necessity. The resulting view is more adequate as a demarcation of the logical expressions of natural language.
This paper studies general numerical networks with support and attack. Our starting point is argumentation networks with the Caminada labelling of three values: 1 = in, 0 = out and ½ = undecided. This is generalised to arbitrary values in the unit interval [0, 1], which enables us to compare with other numerical networks such as predator-prey ecological networks, flow networks, logical modal networks and more. This new point of view allows us to see the place of argumentation networks in the overall landscape of networks and import and export ideas to and from argumentation networks. We make a special effort to make clear how general concepts in general networks relate to the special case of argumentation networks. We pay special attention to the handling of loops and to the special features of numerical support. We find surprising connections with the Dempster-Shafer rule and with the cross-ratio in projective geometry. This paper is an expansion of our 2005 paper and so we also consider higher level features such as numerical attacks on attacks, and propagation of numerical values. We conclude with a brief view of temporal numerical argumentation and with a detailed comparison with related papers published since 2005.
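To give a concrete feel for numerical labellings in [0, 1], here is one simple equational update scheme. The particular rule, v(a) = 1 − max of the attackers' values with unattacked nodes fixed at 1, is an illustrative choice of mine, not necessarily the semantics the paper adopts; on simple attack chains it recovers the three Caminada values.

```python
# Iterate v(a) = 1 - max{v(b) : b attacks a}; unattacked nodes get 1.
# All nodes start at the "undecided" value 1/2.

def numeric_labelling(arguments, attacks, iterations=100):
    attackers_of = {a: [x for (x, y) in attacks if y == a] for a in arguments}
    v = {a: 0.5 for a in arguments}
    for _ in range(iterations):
        # Simultaneous update: each new value reads the previous round's v.
        v = {
            a: 1.0 if not attackers_of[a]
            else 1.0 - max(v[b] for b in attackers_of[a])
            for a in arguments
        }
    return v

# a -> b -> c: a is in (1), b is out (0), c is reinstated (1).
# The self-attacking node d stays undecided (1/2), as in Caminada labelling.
print(numeric_labelling({"a", "b", "c", "d"},
                        {("a", "b"), ("b", "c"), ("d", "d")}))
```

On the odd loop d the iteration sits at the fixed point ½, matching the undecided label; richer update rules (e.g. ones incorporating support edges or Dempster-Shafer-style combination) would produce genuinely intermediate values.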