Locke, Berkeley, and Gentzen gave different justifications of universal generalization. In particular, Gentzen's justification is the one currently used in most logic textbooks. In this paper I argue that all such justifications are problematic, and I propose an alternative justification related to the approach to generality found in Greek mathematics.
This paper tries to show that Kim's strategy for preventing the generalization problem concerning mental causation is not successful, and that his original supervenience argument can be applied to cases of nonmental macrolevel causation, with the effect that nonmental macroproperties that merely supervene on, but are not identical with, configurations of microproperties turn out to be epiphenomenal after all.
The paper attempts to analyze in some detail the main problems encountered in reasoning with diagrams, which may cause errors in reasoning, raise doubts concerning the reliability of diagrams, and create the impression that diagrammatic reasoning lacks the rigour necessary for mathematical reasoning. The paper first argues that such impressions stem from long neglect, which has left diagrammatic reasoning without well-developed, properly tested, and reliable methods, in contrast with the amount of work generations of mathematicians expended on refining the methods of reasoning with formulae and predicate calculus. Next, two main groups of problems occurring in diagrammatic reasoning are introduced. The second group, called diagram imprecision, is then briefly summarized, its detailed analysis being postponed to another paper. The first group, called collectively the generalization problem, is analyzed in detail in the rest of the paper. The nature and causes of the problems from this group are explained, methods of detecting the potentially harmful occurrences of these problems are discussed, and remedies for possible errors they may cause are proposed. Some of the methods are adapted from similar methods used in reasoning with formulae, while several others constitute new, specifically diagrammatic methods of reliable reasoning.
This paper analyzes four instances, in talk, of generalization about people, that is, of using statements about one or more people as the basis for stating something about a category. Generalization can be seen as a categorization practice that involves a reflexive relationship between the generalized-from person or people and the generalized-to category. One thing accomplished through generalization is instruction in how to understand the identity of the generalized-from person or people, so in addition to being understood as a practice of categorization, generalization can also be understood as a practice of identification. Somewhat incidentally, this paper also illustrates the importance of certain methodological issues related to membership categorization analysis and contributes to the growing body of work connecting membership categorization analysis with sequential conversation analysis.
Research in education and cognitive development suggests that explaining plays a key role in learning and generalization: When learners provide explanations—even to themselves—they learn more effectively and generalize more readily to novel situations. This paper proposes and tests a subsumptive constraints account of this effect. Motivated by philosophical theories of explanation, this account predicts that explaining guides learners to interpret what they are learning in terms of unifying patterns or regularities, which promotes the discovery of broad generalizations. Three experiments provide evidence for the subsumptive constraints account: prompting participants to explain while learning artificial categories promotes the induction of a broad generalization underlying category membership, relative to describing items (Exp. 1), thinking aloud (Exp. 2), or free study (Exp. 3). Although explaining facilitates discovery, Experiment 1 finds that description is more beneficial for learning item details. Experiment 2 additionally suggests that explaining anomalous observations may play a special role in belief revision. The findings provide insight into explanation’s role in discovery and generalization.
There are plausible objections to substitutional construals of generalization. But these objections do not apply to a substitutional construal of generalization proposed by Peter Geach several years ago. This paper examines Geach’s conception.
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key “sampling” assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
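The strong/weak sampling distinction can be made concrete in a toy Bayesian model. The sketch below is a hypothetical illustration, not the authors' model: hypotheses are integer intervals on a number line with a uniform prior, `theta` is the assumed mixture weight on strong sampling, and the weak-sampling constant 0.5 is an arbitrary choice.

```python
# Toy Bayesian generalization with a mixture of strong and weak sampling.
# Hypothetical illustration only: hypotheses are integer intervals [a, b]
# on a 1-D number line, with a uniform prior over hypotheses.

def likelihood(data, hyp, theta):
    """Per-example mixture likelihood: theta weighs strong sampling
    (probability 1/|hyp| per positive example); 1 - theta weighs weak
    sampling (a constant, arbitrarily set to 0.5 here)."""
    a, b = hyp
    size = b - a + 1
    p = 1.0
    for x in data:
        if not a <= x <= b:
            return 0.0  # the concept must contain every positive example
        p *= theta * (1.0 / size) + (1.0 - theta) * 0.5
    return p

def generalize(data, y, theta, lo=1, hi=10):
    """Posterior probability that a novel item y falls under the concept."""
    hyps = [(a, b) for a in range(lo, hi + 1) for b in range(a, hi + 1)]
    weights = {h: likelihood(data, h, theta) for h in hyps}
    total = sum(weights.values())
    return sum(w for (a, b), w in weights.items() if a <= y <= b) / total

# Strong sampling (theta=1) favours tight hypotheses, so generalization
# to a distant item is narrower than under weak sampling (theta=0).
strong = generalize([4, 5], 9, theta=1.0)
weak = generalize([4, 5], 9, theta=0.0)
print(strong, weak)
```

Intermediate values of `theta` interpolate smoothly between the two extremes, which is the sense in which the mixture account generalizes both.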
The paper contains a survey of (mainly unpublished) adaptive logics of inductive generalization. These defeasible logics are precise formulations of certain inductive methods. Some attention is also paid to ways of handling background knowledge, to introducing mere conjectures, and to the research-guiding capabilities of the logics.
According to the simple proposal, a predicate is rigid iff it signifies the same property across the different possible worlds. The simple proposal has been claimed to suffer from an over-generalization problem. Assume that one can make sense of predicates signifying properties, and assume that trivialization concerns, to the effect that the notion would cover any predicate whatsoever, can be overcome. Still, the proposal would over-generalize, the worry has it, by covering predicates for artifactual, social, or evaluative properties, such as ‘is a knife,’ ‘is a bachelor,’ or ‘is funny.’ In defense, it is argued that rigidity for predicates as characterized plays the appropriate theoretical role, and that the contention that “unnatural” properties are not to be rigidly signified is ungrounded.
Judging similarities among objects, events, and experiences is one of the most basic cognitive abilities, allowing us to make predictions and generalizations. The main assumption in similarity judgment is that people selectively attend to salient features of stimuli and judge their similarities on the basis of the common and distinct features of the stimuli. However, it is unclear how people select features from stimuli and how they weigh features. Here, we present a computational method that helps address these questions. Our procedure combines image-processing techniques with a machine-learning algorithm and assesses feature weights that can account for both similarity and categorization judgment data. Our analysis suggests that a small number of local features are particularly important to explain our behavioral data.
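One standard way to formalize similarity over weighted common and distinct features is Tversky's contrast model; the sketch below is a hypothetical illustration in that spirit, not the authors' image-based procedure. The feature names, weights, and the parameters `alpha` and `beta` are invented for the example.

```python
# Feature-weighted similarity in the spirit of Tversky's contrast model:
# similarity increases with weighted common features and decreases with
# weighted distinctive features. All features and weights are made up.

def similarity(a, b, w, alpha=1.0, beta=0.5):
    common = sum(w[f] for f in a & b)    # features shared by both stimuli
    distinct = sum(w[f] for f in a ^ b)  # features of exactly one stimulus
    return alpha * common - beta * distinct

weights = {'wings': 2.0, 'beak': 1.5, 'fur': 1.0, 'tail': 0.5}
robin = {'wings', 'beak', 'tail'}
sparrow = {'wings', 'beak'}
cat = {'fur', 'tail'}

# Heavily weighted shared features make robin closer to sparrow than to cat.
print(similarity(robin, sparrow, weights))
print(similarity(robin, cat, weights))
```

Fitting the weight vector `w` to behavioral judgments, rather than stipulating it, is the kind of problem the machine-learning step in the paper's procedure addresses.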
We argue that broad, simple generalizations, not specifically linked to contingencies, will rarely approach truth in ecology and evolutionary biology. This is because most interesting phenomena have multiple, interacting causes. Instead of looking for single universal theories to explain the great diversity of natural systems, we suggest that it would be profitable to develop general explanatory frameworks. A framework should clearly specify focal levels. The process or pattern that we wish to study defines our level of focus. The set of potential and actual states at the focal level interacts with conditions at the contiguous lower and upper levels of organization, through sets of many-to-one and one-to-many connections. The number of initiating conditions and their permutations at the lower level define the potential states at the focal level, whereas the actual state is constrained by the upper-level boundary conditions. The most useful generalizations are explanatory frameworks, which are road maps to solutions, rather than solutions themselves. Such frameworks outline what is understood about boundary conditions and initiating conditions so that an investigator can pick and choose what is required to effectively understand a specific event or situation. We discuss these relationships in terms of examples involving sex ratio and mating behavior, competitive hierarchies, insect life-histories, and the evolution of sex.
There is a productive and suggestive approach in philosophical logic based on the idea of generalized truth values. This idea, which stems essentially from the pioneering works of J.M. Dunn and N. Belnap, and which has recently been developed further by Y. Shramko and H. Wansing, is closely connected to the power-set formation over some initial set of truth values. Having a set of generalized truth values, one can introduce fundamental logical notions, more specifically, those of logical operations and logical entailment. This can be done in two different ways. According to the first, advanced by J.M. Dunn, N. Belnap, Y. Shramko, and H. Wansing, one defines on the given set of generalized truth values a specific ordering relation (or even several such relations) called the logical order(s), and then interprets logical connectives as well as the entailment relation(s) via these orderings. In particular, the negation connective is then determined by the inversion of the logical order. But there is also another method, grounded in the notion of a quasi-field of sets, considered by Białynicki-Birula and Rasiowa. The key point of this approach consists in defining an operation of quasi-complement via the very specific function g and then interpreting entailment simply through the relation of set-inclusion between generalized truth values. In this paper, we give a constructive proof of the claim that, for any finite set V with cardinality greater than or equal to 2, there exists a representation of a quasi-field of sets isomorphic to a de Morgan lattice. In particular, this means that we offer a special procedure which allows us to make our negation a de Morgan negation and our logic relevant.
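The first approach described above — generalized truth values as subsets of initial values, with connectives read off a logical order and negation as its inversion — can be illustrated with the simplest case, the Belnap–Dunn four values over {t, f}. This is a minimal sketch of that standard construction, not the paper's quasi-field representation.

```python
# Belnap-Dunn generalized truth values as subsets of {t, f}:
# N = {} (neither), F = {f}, T = {t}, B = {t, f} (both).
from itertools import product

N, F, T, B = frozenset(), frozenset('f'), frozenset('t'), frozenset('tf')
VALUES = [N, F, T, B]

def neg(x):
    """Negation inverts the truth order by swapping the roles of t and f."""
    out = set()
    if 't' in x:
        out.add('f')
    if 'f' in x:
        out.add('t')
    return frozenset(out)

def conj(x, y):
    """Meet in the truth order: told true iff both are, told false iff either is."""
    out = set()
    if 't' in x and 't' in y:
        out.add('t')
    if 'f' in x or 'f' in y:
        out.add('f')
    return frozenset(out)

def disj(x, y):
    """Join in the truth order: told true iff either is, told false iff both are."""
    out = set()
    if 't' in x or 't' in y:
        out.add('t')
    if 'f' in x and 'f' in y:
        out.add('f')
    return frozenset(out)

# The resulting negation is de Morgan: involutive, and it dualizes
# conjunction and disjunction over all sixteen pairs of values.
ok = all(neg(neg(x)) == x for x in VALUES) and all(
    neg(conj(x, y)) == disj(neg(x), neg(y)) for x, y in product(VALUES, repeat=2)
)
print(ok)
```

The paper's contribution concerns recovering exactly this de Morgan behavior in the second, quasi-field setting, where negation comes from a quasi-complement rather than from order inversion.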