Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models. Key Words: additive clustering; Bayesian inference; categorization; concept learning; contrast model; features; generalization; psychological space; similarity.
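To make the shift from Shepard's single-stimulus gradient to the Bayesian, multiple-example account concrete, here is a minimal sketch of generalization by hypothesis averaging with a size-principle likelihood. It assumes a toy hypothesis space of intervals on a single psychological dimension; the function name, the grid of candidate intervals, and the example values are illustrative choices, not anything specified in the abstract.

```python
import numpy as np

def generalization_prob(x_new, examples, hypotheses):
    """Probability that x_new shares the consequence observed for the examples.

    Each hypothesis is a (lo, hi) interval on a one-dimensional psychological
    space; the size-principle likelihood favours the smallest hypotheses that
    still contain every observed consequential stimulus.
    """
    covered_mass = 0.0
    total_mass = 0.0
    for lo, hi in hypotheses:
        if all(lo <= x <= hi for x in examples):
            weight = (1.0 / (hi - lo)) ** len(examples)  # size principle
        else:
            weight = 0.0
        total_mass += weight
        if lo <= x_new <= hi:
            covered_mass += weight
    return covered_mass / total_mass if total_mass > 0 else 0.0

# Toy usage: candidate intervals of several widths centred on a grid of points.
hyps = [(c - w / 2, c + w / 2) for c in np.linspace(0, 10, 41) for w in (1, 2, 4, 8)]
print(generalization_prob(6.0, examples=[4.8, 5.0, 5.3], hypotheses=hyps))
```

With a single example, averaging the predictions of nested hypotheses in this way produces an approximately exponential generalization gradient, which is how the Bayesian reformulation recovers Shepard's law as a special case.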
Research in education and cognitive development suggests that explaining plays a key role in learning and generalization: When learners provide explanations—even to themselves—they learn more effectively and generalize more readily to novel situations. This paper proposes and tests a subsumptive constraints account of this effect. Motivated by philosophical theories of explanation, this account predicts that explaining guides learners to interpret what they are learning in terms of unifying patterns or regularities, which promotes the discovery of broad generalizations. Three experiments provide evidence for the subsumptive constraints account: prompting participants to explain while learning artificial categories promotes the induction of a broad generalization underlying category membership, relative to describing items (Exp. 1), thinking aloud (Exp. 2), or free study (Exp. 3). Although explaining facilitates discovery, Experiment 1 finds that description is more beneficial for learning item details. Experiment 2 additionally suggests that explaining anomalous observations may play a special role in belief revision. The findings provide insight into explanation's role in discovery and generalization.
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key 'sampling' assumption about how the available data were generated. Previous models have considered two extreme possibilities, known as strong and weak sampling. In strong sampling, data are assumed to have been deliberately generated as positive examples of a concept, whereas in weak sampling, data are assumed to have been generated without any restrictions. We develop a more general account of sampling that allows for an intermediate mixture of these two extremes, and we test its usefulness. In two experiments, we show that most people complete simple one-dimensional generalization tasks in a way that is consistent with their believing in some mixture of strong and weak sampling, but that there are large individual differences in the relative emphasis different people give to each type of sampling. We also show experimentally that the relative emphasis of the mixture is influenced by the structure of the available information. We discuss the psychological meaning of mixing strong and weak sampling, and possible extensions of our modeling approach to richer problems of inductive generalization.
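As a concrete illustration of the strong/weak sampling mixture the abstract describes, here is a minimal sketch of the likelihood a learner might assign to a positive example under an interval hypothesis on a single dimension. The names (theta for the strong-sampling weight, STIMULUS_RANGE for the span of the dimension) and the toy numbers are assumptions made for illustration, not the authors' implementation.

```python
STIMULUS_RANGE = 10.0  # assumed span of the one-dimensional stimulus space

def mixture_likelihood(example, hypothesis, theta):
    """Likelihood of a positive example under a mixture of strong and weak sampling.

    hypothesis: (lo, hi) interval of stimuli the concept covers.
    theta: weight on strong sampling, from 0 (pure weak) to 1 (pure strong).
    """
    lo, hi = hypothesis
    if not (lo <= example <= hi):
        return 0.0                        # a positive example must fall inside the concept
    strong = 1.0 / (hi - lo)              # strong sampling: drawn uniformly from the concept
    weak = 1.0 / STIMULUS_RANGE           # weak sampling: drawn without regard to the concept
    return theta * strong + (1.0 - theta) * weak

# A narrow hypothesis gains more from the same example as theta grows.
print(mixture_likelihood(5.0, (4.0, 6.0), theta=1.0))  # strong sampling only
print(mixture_likelihood(5.0, (4.0, 6.0), theta=0.0))  # weak sampling only
```

Under pure strong sampling the likelihood concentrates on small hypotheses, so generalization tightens quickly with additional examples; under pure weak sampling it does not, which is the behavioral signature the experiments exploit.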
This paper defines the form of prior knowledge that is required for sound inferences by analogy and single-instance generalizations, in both logical and probabilistic reasoning. In the logical case, the first-order determination rule defined in Davies (1985) is shown to solve both the justification and non-redundancy problems for analogical inference. The statistical analogue of determination that is put forward is termed 'uniformity'. Based on the semantics of determination and uniformity, a third notion of 'relevance' is defined, both logically and probabilistically. The statistical relevance of one function in determining another is put forward as a way of defining the value of information: The statistical relevance of a function F to a function G is the absolute value of the change in one's information about the value of G afforded by specifying the value of F. This theory provides normative justifications for conclusions projected by analogy from one case to another, and for generalization from an instance to a rule. The soundness of such conclusions, in either the logical or the probabilistic case, can be identified with the extent to which the corresponding criteria (determination and uniformity) actually hold for the features being related.
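One way to make the abstract's informal definition of statistical relevance concrete is to read 'information about the value of G' as Shannon entropy, so that the relevance of F to G becomes the absolute reduction in the entropy of G once F is specified. The sketch below estimates this from observed (f, g) pairs; the function names and the toy language/nationality data are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter, defaultdict

def entropy(counts):
    """Shannon entropy (in bits) of a distribution given as a Counter of outcomes."""
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values() if n)

def relevance(pairs):
    """|H(G) - H(G | F)| estimated from a list of observed (f, g) pairs."""
    g_counts = Counter(g for _, g in pairs)
    h_g = entropy(g_counts)
    by_f = defaultdict(Counter)
    for f, g in pairs:
        by_f[f][g] += 1
    n = len(pairs)
    h_g_given_f = sum(sum(c.values()) / n * entropy(c) for c in by_f.values())
    return abs(h_g - h_g_given_f)

# Toy data: F = native language, G = nationality (a determination-style example).
data = [("pt", "Brazil"), ("pt", "Brazil"), ("en", "US"), ("en", "UK")]
print(relevance(data))  # 1.0 bit: knowing F removes one bit of uncertainty about G
```

On this reading, full determination of G by F is the limiting case in which the conditional entropy vanishes and relevance equals the entire prior uncertainty about G.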
The paper analyzes in some detail the main problems encountered in reasoning with diagrams, problems that may cause errors in reasoning, produce doubts about the reliability of diagrams, and create the impression that diagrammatic reasoning lacks the rigour necessary for mathematical reasoning. The paper first argues that such impressions stem from long neglect, which has left diagrammatic reasoning without the well-developed, properly tested and reliable methods that generations of mathematicians have refined for reasoning with formulae and the predicate calculus. Next, two main groups of problems occurring in diagrammatic reasoning are introduced. The second group, called diagram imprecision, is then briefly summarized, its detailed analysis being postponed to another paper. The first group, called collectively the generalization problem, is analyzed in detail in the rest of the paper. The nature and causes of the problems in this group are explained, methods of detecting their potentially harmful occurrences are discussed, and remedies for the errors they may cause are proposed. Some of the methods are adapted from similar methods used in reasoning with formulae; several others constitute new, specifically diagrammatic ways of ensuring reliable reasoning.
According to the simple proposal, a predicate is rigid iff it signifies the same property across the different possible worlds. The simple proposal has been claimed to suffer from an over-generalization problem. Assume that one can make sense of predicates signifying properties, and assume that trivialization concerns, to the effect that the notion would cover any predicate whatsoever, can be overcome. Still, the proposal would over-generalize, the worry has it, by covering predicates for artifactual, social, or evaluative properties, such as 'is a knife,' 'is a bachelor,' or 'is funny.' In defense, it is argued that rigidity for predicates as characterized plays the appropriate theoretical role, and that the contention that "unnatural" properties are not to be rigidly signified is ungrounded.
The paper contains a survey of (mainly unpublished) adaptive logics of inductive generalization. These defeasible logics are precise formulations of certain methods. Some attention is also paid to ways of handling background knowledge, introducing mere conjectures, and the research guiding capabilities of the logics.
The universal generalization problem is the question: What entitles one to conclude that a property established for an individual object holds for any individual object in the domain? This amounts to the question: Why is the rule of universal generalization justified? In the modern and contemporary age, Descartes, Locke, Berkeley, Hume, Kant, Mill, and Gentzen offered alternative solutions to the universal generalization problem. In this paper I consider Locke's, Berkeley's and Gentzen's solutions and argue that they are problematic. Then I consider an alternative formulation of universal generalization which depends on the view that mathematical objects are individual objects and are hypotheses introduced to solve mathematical problems, and that mathematical proofs are argument schemata. I argue that this alternative formulation allows one to overcome the problems of Locke's, Berkeley's and Gentzen's solutions, and is related to the approach to generality in Greek mathematics. I also argue that there is a connection between the present formulation of universal generalization and a special form of the analogy rule which is implicit in Proclus' approach to the universal generalization problem.
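For readers who want the rule at issue in front of them, the following is a standard natural-deduction statement of universal generalization (forall-introduction), with its usual eigenvariable restriction; the notation is a common textbook rendering rather than a formulation drawn from any of the authors discussed.

```latex
% Universal generalization (forall-introduction), standard formulation:
% from a proof of A(a) for an arbitrary a that does not occur free in the
% open assumptions Gamma, infer that A holds of every object in the domain.
\[
\frac{\Gamma \vdash A(a)}{\Gamma \vdash \forall x\, A(x)}
\qquad \text{provided } a \text{ does not occur free in } \Gamma
\]
```

The universal generalization problem is then the question of what entitles the step from the premise about the arbitrary a to the universally quantified conclusion.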
This paper analyzes four instances in talk of generalization about people, that is, of using statements about one or more people as the basis of stating something about a category. Generalization can be seen as a categorization practice which involves a reflexive relationship between the generalized-from person or people and the generalized-to category. One thing that is accomplished through generalization is instruction in how to understand the identity of the generalized-from person or people, so in addition to being understood as a practice of categorization, generalization can also be understood as a practice of identification. Somewhat incidentally, this paper also illustrates the importance of certain methodological issues related to membership categorization analysis and contributes to the growing body of work that connects membership categorization analysis with sequential conversation analysis.
Lee and Baskerville (2003) attempted to clarify the concept of generalization and classify it into four types. In Tsang and Williams (2012) we objected to their account of generalization as well as their classification and offered repairs. We then proposed a classification of induction, within which we distinguished five types of generalization. In their (2012) rejoinder, they argue that their classification is compatible with ours, claiming that theirs offers a 'new language.' Insofar as we resist this 'new language,' and insofar as they think that our position commits us to positivism and the rejection of interpretivism, they conclude both that our classification is more restrictive than theirs and that we embrace 'paradigmatic domination.' Lee and Baskerville's classification of generalization is based on a distinction between theoretical and empirical statements. Accordingly, we first clarify the terms 'theoretical statement' and 'empirical statement.' We note that they find no fault with our classification of induction, we restate our main objections to their classification that remain unanswered, and we show that their classification of generalizing is in fact incompatible with ours. We argue that their account of generalization retains fatal flaws and should not be relied upon. We demonstrate that our classification is not committed to any paradigm, and so we do not embrace 'paradigmatic domination.'
Expanding on the results of previous contributions, I advance several hypotheses on the interaction of physical and semiotic processes, both in organisms and in human artifacts. I then proceed to employ these ideas to formulate a general account of evolutionary processes in terms of concrete generalization, where, in analogy with conceptual generalization, novel creations retain antecedent features as special or restricted cases. I argue the following theses: 1) the main point of intersection of physical and semiotic causation is the process of regulation; 2) the broadest form of regulation is a course of actions known as modulation; 3) modulation is a universal means of conveyance by which a form is lifted from one supporting vehicle and re-embodied into another replica. These considerations suggest viewing biological evolution as a complex generalization of semiotic evolution. Biological reproduction is then regarded as a complex generalization of sign replication. The latter is a relatively simple affair: the embodied form is external and largely independent of its indifferent supporting medium. Biological reproduction, on the contrary, is an extremely complex dynamical process in which the embodied type is duplicated from within the medium, through the subsidiary internal replication of molecular genetic records. These ideas are developed and illustrated through comparisons between the evolution of organisms and that of human artifacts.
Generalization from a case study is a perennial issue in the methodology of the social sciences. The case study is one of the most important research designs in many social scientific fields, but no shared understanding exists of the epistemic import of case studies. This article suggests that the idea of mechanism-based theorizing provides a fruitful basis for understanding how case studies contribute to a general understanding of social phenomena. This approach is illustrated with a reconstruction of Espeland and Sauder's case study of the effects of rankings on US legal education. On the basis of the reconstruction, it is argued that, at least with respect to sociology, the idea of mechanism-based theorizing captures many of the generalizable elements of case studies.
In order to be complete, Horwich's minimalist theory must be able to deal with generalizations about truth. A logical and an epistemic-explanatory level of the generalization problem are distinguished, and Horwich's responses to both sides of the problem are examined. Finally, some persistent problems for minimalism are pointed out.
There are plausible objections to substitutional construals of generalization. But these objections do not apply to a substitutional construal of generalization proposed by Peter Geach several years ago. This paper examines Geach’s conception.
Over the past several decades, we devoted much energy to generating, reviewing and summarizing evidence. We have given far less attention to the issue of how to thoughtfully apply the evidence once we have it. That's fine if all we care about is that our clinical decisions are evidence-based, but not so good if we also want them to be well-reasoned. Let us not forget that evidence-based medicine (EBM) grew out of an interest in making medicine 'rational', with the idea that rational clinical evaluations should be evidence-based. I agree with the uncontroversial statement that the best decision is supported, at least in part, by the best available evidence. Rationality, however, is constituted by reasoning, not evidence. Complete arguments are necessary for rational evaluations, arguments that begin with general evidence and end in a conclusion about a particular patient. In order to traverse these inferential gaps, medicine must address the issue of how to establish, as an intermediate premise, what the evidence has to say about the efficacy of an intervention for particular patients in a particular practice setting.
Judging similarities among objects, events, and experiences is one of the most basic cognitive abilities, allowing us to make predictions and generalizations. The main assumption in similarity judgment is that people selectively attend to salient features of stimuli and judge their similarities on the basis of the common and distinct features of the stimuli. However, it is unclear how people select features from stimuli and how they weigh features. Here, we present a computational method that helps address these questions. Our procedure combines image-processing techniques with a machine-learning algorithm and assesses feature weights that can account for both similarity and categorization judgment data. Our analysis suggests that a small number of local features are particularly important to explain our behavioral data.
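As a schematic illustration of the kind of feature-weighted account the abstract describes, here is a minimal sketch in which similarity is a weighted sum over shared binary features; the weights stand in for the saliences the authors estimate with their image-processing and machine-learning procedure. The feature vectors, weights, and function name are invented for illustration.

```python
import numpy as np

def weighted_similarity(features_a, features_b, weights):
    """Similarity as a salience-weighted count of shared features.

    features_a, features_b: 0/1 vectors marking which candidate features are present.
    weights: per-feature saliences, e.g. fitted so that predicted similarities
    reproduce human similarity or categorization judgments.
    """
    shared = features_a * features_b          # features common to both stimuli
    return float(np.dot(weights, shared))

# Toy usage with three hypothetical local features.
w = np.array([0.7, 0.2, 0.5])
print(weighted_similarity(np.array([1, 0, 1]), np.array([1, 1, 1]), w))  # 1.2
```

A fit in which only a few weights end up substantially above zero would correspond to the abstract's finding that a small number of local features carries most of the explanatory burden.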
We argue that broad, simple generalizations, not specifically linked to contingencies, will rarely approach truth in ecology and evolutionary biology. This is because most interesting phenomena have multiple, interacting causes. Instead of looking for single universal theories to explain the great diversity of natural systems, we suggest that it would be profitable to develop general explanatory frameworks. A framework should clearly specify focal levels. The process or pattern that we wish to study defines our level of focus. The set of potential and actual states at the focal level interacts with conditions at the contiguous lower and upper levels of organization, through sets of many-to-one and one-to-many connections. The number of initiating conditions and their permutations at the lower level define the potential states at the focal level, whereas the actual state is constrained by the upper-level boundary conditions. The most useful generalizations are explanatory frameworks, which are road maps to solutions, rather than solutions themselves. Such frameworks outline what is understood about boundary conditions and initiating conditions so that an investigator can pick and choose what is required to effectively understand a specific event or situation. We discuss these relationships in terms of examples involving sex ratio and mating behavior, competitive hierarchies, insect life-histories and the evolution of sex.