Kit Fine develops a Fregean theory of abstraction, and suggests that it may yield a new philosophical foundation for mathematics, one that can account for both our reference to various mathematical objects and our knowledge of various mathematical truths. The Limits of Abstraction breaks new ground both technically and philosophically.
Philosophers have often claimed that general ideas or representations have their origin in abstraction, but it remains unclear exactly what abstraction as a psychological process consists in. We argue that the Lockean aspiration of using abstraction to explain the origins of all general representations cannot work and that at least some general representations have to be innate. We then offer an explicit framework for understanding abstraction, one that treats abstraction as a computational process that operates over an innate quality space of fine-grained general representations. We argue that this framework has important philosophical implications for the nativism-empiricism dispute, for questions about the acquisition of unstructured representations, and for questions about the relation between human and animal minds.
The process of abstraction and concretisation is a label used for an explicative theory of scientific model-construction. In scientific theorising this process enters at various levels. We could identify two principal levels of abstraction that are useful to our understanding of theory-application. The first level is that of selecting a small number of variables and parameters abstracted from the universe of discourse and used to characterise the general laws of a theory. In classical mechanics, for example, we select position and momentum and establish a relation between the two variables, which we call Newton's 2nd law. The specification of the unspecified elements of scientific laws, e.g. the force function in Newton's 2nd law, is what would establish the link between the assertions of the theory and physical systems. In order to unravel how and with what conceptual resources scientific models are constructed, how they function and how they relate to theory, we need a view of theory-application that can accommodate our constructions of representation models. For this we need to expand our understanding of the process of abstraction to also explicate the process of specifying force functions etc. This is the second principal level at which abstraction enters into our theorising and on which I focus. In this paper, I attempt to elaborate a general analysis of the process of abstraction and concretisation involved in scientific-model construction, and argue why it provides an explication of the construction of models of the nuclear structure.
The use of “levels of abstraction” in philosophical analysis (levelism) has recently come under attack. In this paper, I argue that a refined version of epistemological levelism should be retained as a fundamental method, called the method of levels of abstraction. After a brief introduction, in section “Some Definitions and Preliminary Examples” the nature and applicability of the epistemological method of levels of abstraction is clarified. In section “A Classic Application of the Method of Abstraction”, the philosophical fruitfulness of the new method is shown by using Kant’s classic discussion of the “antinomies of pure reason” as an example. In section “The Philosophy of the Method of Abstraction”, the method is further specified and supported by distinguishing it from three other forms of “levelism”: (i) levels of organisation; (ii) levels of explanation; and (iii) conceptual schemes. In that context, the problems of relativism and antirealism are also briefly addressed. The conclusion discusses some of the work that lies ahead, two potential limitations of the method and some results that have already been obtained by applying the method to some long-standing philosophical problems.
We characterize abstraction in computer science by first comparing the fundamental nature of computer science with that of its cousin mathematics. We consider their primary products, use of formalism, and abstraction objectives, and find that the two disciplines are sharply distinguished. Mathematics, being primarily concerned with developing inference structures, has information neglect as its abstraction objective. Computer science, being primarily concerned with developing interaction patterns, has information hiding as its abstraction objective. We show that abstraction through information hiding is a primary factor in computer science progress and success through an examination of the ubiquitous role of information hiding in programming languages, operating systems, network architecture, and design patterns.
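The contrast the abstract draws — information hiding as opposed to mere information neglect — can be illustrated with a minimal sketch (the class and names here are hypothetical illustrations, not taken from the paper): clients see only a small interface, and the hidden representation can change without disturbing them.

```python
class Stack:
    """A stack whose representation is hidden behind a small interface.

    Clients interact only with push/pop/peek/len; the underlying list
    could be swapped for a linked structure without changing client code.
    This is information *hiding*: the detail exists but is inaccessible,
    rather than merely ignored.
    """

    def __init__(self):
        self._items = []  # hidden representation detail

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        if not self._items:
            raise IndexError("peek at empty stack")
        return self._items[-1]

    def __len__(self):
        return len(self._items)


# Client code depends only on the interface, never on the list inside.
s = Stack()
s.push(1)
s.push(2)
assert s.peek() == 2
assert s.pop() == 2
assert len(s) == 1
```

The design choice is the one the abstract attributes to computer science generally: the hidden state is still causally active, unlike a mathematician's abstraction, which simply neglects the detail altogether.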
Abstraction is seen as an active process which both enlightens and obscures. Abstractions are not true or false but relatively enlightening or obscuring according to the problem under study; different abstractions may grasp different aspects of a problem. Abstractions may be useless if they can answer questions only about themselves. A theoretical enterprise explores reality through a cluster of abstractions that use different perspectives, temporal and horizontal scales, and assume different givens.
Which abstraction principles are acceptable? A variety of criteria have been proposed, in particular irenicity, stability, conservativeness, and unboundedness. This note charts their logical relations. This answers some open questions and corrects some old answers.
This paper presents a formalization of first-order arithmetic characterizing the natural numbers as abstracta of the equinumerosity relation. The formalization turns on the interaction of a nonstandard (but still first-order) cardinality quantifier with an abstraction operator assigning objects to predicates. The project draws its philosophical motivation from a nonreductionist conception of logicism, a deflationary view of abstraction, and an approach to formal arithmetic that emphasizes the cardinal properties of the natural numbers over the structural ones.
Laws of computer science are prescriptive in nature but can have descriptive analogs in the physical sciences. Here, we describe a law of conservation of information in network programming, and various laws of computational motion (invariants) for programming in general, along with their pedagogical utility. Invariants specify constraints on objects in abstract computational worlds, so we describe language and data abstraction employed by software developers and compare them to Floridi's concept of levels of abstraction. We also consider Floridi's structural account of reality and its fit for describing abstract computational worlds. Being abstract, such worlds are products of programmers' creative imaginations, so any "laws" in these worlds are easily broken. The worlds of computational objects need laws in the form of self-prescribed invariants, but the suspension of these laws might be creative acts. Bending the rules of abstract reality facilitates algorithm design, as we demonstrate through the example of search trees.
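The idea of a self-prescribed invariant as a breakable "law" can be sketched with the abstract's own example of search trees (a generic illustration, not the paper's code): the ordering invariant is a constraint the programmer imposes, can check, and can also deliberately violate.

```python
class Node:
    """A node in a binary search tree (BST)."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert a key while preserving the BST 'law': left < node < right."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def satisfies_invariant(node, lo=float("-inf"), hi=float("inf")):
    """Check the self-prescribed invariant of this abstract world."""
    if node is None:
        return True
    return (lo < node.key < hi
            and satisfies_invariant(node.left, lo, node.key)
            and satisfies_invariant(node.right, node.key, hi))

root = None
for k in [5, 2, 8, 1, 9]:
    root = insert(root, k)
assert satisfies_invariant(root)

root.left.key = 7              # "breaking the law": nothing stops us
assert not satisfies_invariant(root)
```

Nothing in the language enforces the invariant; it holds only because the programmer legislated it, which is the prescriptive character of computational laws the abstract describes.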
The neo-Fregean program in the philosophy of mathematics seeks a foundation for a substantial part of mathematics in abstraction principles—for example, Hume’s Principle: The number of Fs = the number of Gs iff the Fs and Gs correspond one-one—which can be regarded as implicitly definitional of fundamental mathematical concepts—for example, cardinal number. This paper considers what kind of abstraction principle might serve as the basis for a neo-Fregean set theory. Following a brief review of the main difficulties confronting the most widely discussed proposal to date—replacing Frege’s inconsistent Basic Law V by Boolos’s New V, which restricts the concepts whose extensions obey the principle of extensionality to those which are small in the sense of being smaller than the universe—the paper canvasses an alternative way of implementing the limitation-of-size idea and explores the kind of restrictions which would be required for it to avoid collapse.
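The two principles this abstract mentions can be stated schematically (these are the standard formulations, not quotations from the paper):

```latex
% Hume's Principle: numbers are identical iff the concepts are equinumerous.
\#F = \#G \;\leftrightarrow\; F \approx G
% where F \approx G abbreviates the existence of a one-one
% correspondence between the Fs and the Gs.

% Boolos's New V: extensions obey extensionality only for "small" concepts.
\mathrm{ext}(F) = \mathrm{ext}(G) \;\leftrightarrow\;
  \bigl( (\mathrm{Big}(F) \wedge \mathrm{Big}(G))
         \vee \forall x\, (Fx \leftrightarrow Gx) \bigr)
% where Big(F) says the Fs are equinumerous with the whole universe.
```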
I show how omissions lead to robustness and can justify distortions, and I give inferentially relevant explications of abstraction and idealization. Abstraction is explicated as the omission of all and only those claims that use a specific vocabulary; idealization is explicated as the distortion of only those claims that use a specific vocabulary. With these explications, abstraction can justify idealization. As examples of how abstraction justifies idealization and leads to robustness, I discuss Beauchamp and Childress's four principles of biomedical ethics and the qualitative treatment of the Schrödinger equation.
We develop a functional abstraction principle for the type-free algorithmic logic introduced in our earlier work. Our approach is based on the standard combinators but is supplemented by the novel use of evaluation trees. Then we show that the abstraction principle leads to a Curry fixed point, a statement C that asserts C ⇒ A where A is any given statement. When A is false, such a C yields a paradoxical situation. As discussed in our earlier work, this situation leaves one no choice but to restrict the use of a certain class of implicational rules including modus ponens.
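The paradoxical reasoning from such a fixed point runs along familiar Curry-style lines (a standard reconstruction, not the authors' exact proof):

```latex
% Given a fixed point C with C \leftrightarrow (C \Rightarrow A):
% 1. Assume C.                          [for conditional proof]
% 2. From C infer C \Rightarrow A.      [fixed-point property]
% 3. From 1 and 2 infer A.              [modus ponens]
% 4. Discharge: C \Rightarrow A.        [conditional proof, 1-3]
% 5. From 4 infer C.                    [fixed-point property]
% 6. From 4 and 5 infer A.             [modus ponens]
% Since A was arbitrary, every statement would follow -- hence the need
% to restrict implicational rules such as modus ponens.
```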
Abstraction is arguably one of the most important methods in modern science for analysing and understanding complex phenomena. In his book The Philosophy of Information, Floridi (The philosophy of information. Oxford University Press, Oxford, 2011) presents the method of levels of abstraction as the main method of the Philosophy of Information. His discussion of abstraction as a method seems inspired by the formal methods and frameworks of computer science, in which abstraction is operationalised extensively in programming languages and design methodologies. Is it really clear what we should understand by levels of abstraction? How should they be specified? We will argue that levels of abstraction should be augmented with annotations, in order to express semantic information for them and reconcile the method of levels of abstraction (LoAs) with other approaches. We discuss the extended method when applied, e.g., to the analysis of abstract machines. This will lead to an example in which the number of LoAs is unbounded.
Questions concerning the epistemological status of computer science are, in this paper, answered from the point of view of the formal verification framework. State space reduction techniques adopted to simplify computational models in model checking are analysed in terms of Aristotelian abstractions and Galilean idealizations characterizing the inquiry of empirical systems. Methodological considerations drawn here are employed to argue in favour of the scientific understanding of computer science as a discipline. Specifically, reduced models gained by Data Abstraction are acknowledged as Aristotelian abstractions that include only data which are sufficient to examine the executions of interest. The present study highlights how the need to maximize incompatible properties is at the basis of both Abstraction Refinement, the process of generating a cascade of computational models to achieve a balance between simplicity and informativeness, and the Multiple Model Idealization approach in biology. Finally, fairness constraints, imposed on computational models to allow fair behaviours only, are defined as ceteris paribus conditions under which temporal formulas, formalizing software requirements, acquire the status of law-like statements about the executions of software systems.
This paper presents a new algorithm to find an appropriate similarity under which we apply legal rules analogically. Since there may exist a lot of similarities between the premises of a rule and a case in inquiry, we have to select an appropriate similarity that is relevant to both the legal rule and a top goal of our legal reasoning. For this purpose, a new criterion to distinguish the appropriate similarities from the others is proposed and tested. The criterion is based on Goal-Dependent Abstraction (GDA) to select a similarity such that an abstraction based on the similarity never loses the necessary information to prove the ground (purpose of legislation) of the legal rule. In order to cope with our huge space of similarities, our GDA algorithm uses some constraints to prune useless similarities.
This paper engages the controversy as to whether there is a link between Berkeley’s refutation of abstraction and his refutation of materialism. I argue that there is a strong link. In the opening paragraph I show that materialism being true requires and is required by the possibility of abstraction, and that the obviousness of this fact suggests that the real controversy is whether there is a link between Berkeley’s refutation of materialism and his refutation of the possibility of framing abstract incomplete ideas and abstract general ideas. Although Berkeley can still defeat materialism without relying on his arguments that directly refute the possibility of framing abstract incomplete ideas and abstract general ideas, I contend that there is still a strong link between his refutation of materialism and his refutation of the possibility of framing these ideas. First, I show that the truth of the canonic version of materialism, according to which primary qualities are mind-independent and inhere in material substances, requires the possibility of the mind framing both of these ideas. Second, I show that there is a sense in which the truth of materialism is required by the possibility of either of these ideas.
Human participants and recurrent (“connectionist”) neural networks were both trained on a categorization system abstractly similar to natural language systems involving irregular (“strong”) classes and a default class. Both the humans and the networks exhibited staged learning and a generalization pattern reminiscent of the Elsewhere Condition (Kiparsky, 1973). Previous connectionist accounts of related phenomena have often been vague about the nature of the networks’ encoding systems. We analyzed our network using dynamical systems theory, revealing topological and geometric properties that can be directly compared with the mechanisms of non-connectionist, rule-based accounts. The results reveal that the networks “contain” structures related to mechanisms posited by rule-based models, partly vindicating the insights of these models. On the other hand, they support the one mechanism (OM), as opposed to the more than one mechanism (MOM), view of symbolic abstraction by showing how the appearance of MOM behavior can arise emergently from one underlying set of principles. The key new contribution of this study is to show that dynamical systems theory can allow us to explicitly characterize the relationship between the two perspectives in implemented models.
Neo-Fregeans such as Bob Hale and Crispin Wright seek a foundation of mathematics based on abstraction principles. These are sentences involving a relation called the abstraction relation. It is usually assumed that abstraction relations must be equivalence relations, so reflexive, symmetric and transitive. In this article I argue that abstraction relations need not be reflexive. I furthermore give an application of non-reflexive abstraction relations to restricted abstraction principles.
This paper investigates the roles that abstraction and representation have in activities associated with language. Activities such as associative learning and counting require both the abilities to abstract from and accurately represent the environment. These activities are successfully carried out among vocal learners aside from humans, thereby suggesting that nonhuman animals share something like our capacity for abstraction and representation. The identification of these capabilities in other species provides additional insights into the development of language.
When surgery is performed on pregnant women for the sake of the fetus (MFS or maternal fetal surgery), it is often discussed in terms of the fetus alone. This usage exemplifies what philosophers call the fallacy of abstraction: considering a concept as if it were separable from another concept whose meaning is essentially related to it. In light of their potential separability, research on pregnant women raises the possibility of conflicts between the interests of the woman and those of the fetus. Such research should meet the requirement of equipoise, i.e., a state of genuine uncertainty about the risks and benefits of alternative interventions or noninterventions. While illustrating the fallacy of abstraction in discussions of MFS, we review the rationale for explicit acknowledgment of the essential tie between fetus and pregnant woman. Next we examine whether it is possible to meet the requirement of equipoise in research on MFS, focusing on a fetal condition called myelomeningocele. We show how issues related to equipoise in nonpregnant populations appear also in debates regarding MFS. We also examine evidence in support of claims that the requirement of equipoise has been satisfied with respect to “the fetal patient” while considering risks and benefits to gestating women only marginally or not at all. After delineating challenges and possibilities for equipoise in MFS research, we conclude with a suggestion for avoiding the fallacy of abstraction and achieving equipoise so that research on MFS may be ethically conducted.
There is a growing consensus that the mental lexicon contains both abstract and word-specific acoustic information. To investigate their relative importance for word recognition, we tested to what extent perceptual learning is word specific or generalizable to other words. In an exposure phase, participants were divided into two groups; each group was semantically biased to interpret an ambiguous Mandarin tone contour as either tone1 or tone2. In a subsequent test phase, the perception of ambiguous contours was dependent on the exposure phase: Participants who heard ambiguous contours as tone1 during exposure were more likely to perceive ambiguous contours as tone1 than participants who heard ambiguous contours as tone2 during exposure. This learning effect was only slightly larger for previously encountered than for not previously encountered words. The results speak for an architecture with prelexical analysis of phonological categories to achieve both lexical access and episodic storage of exemplars.
Fitch's basic logic is an untyped illative combinatory logic with unrestricted principles of abstraction, effecting a type collapse between properties (or concepts) and individual elements of an abstract syntax. Fitch does not work axiomatically, and the abstraction operation is not a primitive feature of the inductive clauses defining the logic. Fitch's proof that basic logic has unlimited abstraction is not clear, and it contains a number of errors that have so far gone undetected. This paper corrects these errors and presents a reasonably intuitive proof that Fitch's system K supports an implicit abstraction operation. Some general remarks on the philosophical significance of basic logic, especially with respect to neo-logicism, are offered, and the paper concludes that basic logic models a highly intensional form of logicism.
The dangers of character reification for cladistic inference are explored. The identification and analysis of characters always involves theory-laden abstraction—there is no theory-free “view from nowhere.” Given theory-ladenness, and given a real world with actual objects and processes, how can we separate robustly real biological characters from uncritically reified characters? One way to avoid reification is through the employment of objectivity criteria that give us good methods for identifying robust primary homology statements. I identify six such criteria and explore (...) each with examples. Ultimately, it is important to minimize character reification, because poor character analysis leads to dismal cladograms, even when proper phylogenetic analysis is employed. Given the deep and systemic problems associated with character reification, it is ironic that philosophers have focused almost entirely on phylogenetic analysis and neglected character analysis. (shrink)
In “What is wrong with abstraction?”, Michael Potter and Peter Sullivan explain a further objection to the abstractionist programme in the foundations of mathematics, which they first presented in their “Hale on Caesar” and which they believe our discussion in The Reason's Proper Study misunderstood. The aims of the present note are: to get the character of this objection into sharper focus; to explore further certain of the assumptions—primarily, about reference-fixing in mathematics, about certain putative limitations of abstractionist set theory, and about the effects of impredicativity in abstraction principles—which underlie it; and to advance the debate of the issues thereby raised.
This article addresses the so-called claimability objection to human rights. Focusing specifically on the work of Onora O'Neill, the article challenges two important aspects of her version of this objection. First: its narrowness. O'Neill understands the claimability of a right to depend on the identification of its duty-bearers. But there is good reason to think that the claimability of a right depends on more than just that, which makes abstract (and not welfare) rights the most natural target of her objection (section II). After examining whether we might address this reformulated version of O'Neill's objection by appealing to the specificity afforded to human rights in international, regional and domestic law (in section III), the article challenges a second important feature of that objection by raising doubts about whether claimability is a necessary feature of rights at all (section IV). Finally, the article reflects more generally on the role of abstraction in the theory and practice of human rights (section V). In sum, by allaying claimability-based concerns about abstract rights, and by illustrating some of the positive functions of abstraction in rights discourse, the article hopes to show that abstract rights are not only theoretically coherent but also useful and important.
This paper argues for two related theses. The first is that mathematical abstraction can play an important role in shaping the way we think about and hence understand certain phenomena, an enterprise that extends well beyond simply representing those phenomena for the purpose of calculating/predicting their behaviour. The second is that much of our contemporary understanding and interpretation of natural selection has resulted from the way it has been described in the context of statistics and mathematics. I argue for these claims by tracing attempts to understand the basis of natural selection from its early formulation as a statistical theory to its later development by R.A. Fisher, one of the founders of modern population genetics. Not only did these developments put natural selection on a firm theoretical foundation, but its mathematization changed the way it was understood as a biological process. Instead of simply clarifying its status, mathematical techniques were responsible for redefining or reconceptualising selection. As a corollary I show how a highly idealised mathematical law that seemingly fails to describe any concrete system can nevertheless contain a great deal of accurate information that can enhance our understanding far beyond simply predictive capabilities.
For various reasons several authors have enriched classical first-order syntax by adding a predicate abstraction operator. “Conservatives” have done so without disturbing the syntax of the formal quantifiers, but “revisionists” have argued that predicate abstraction motivates the universal quantifier’s re-classification from an expression that combines with a variable and a sentence to yield a sentence, to an expression that combines with a one-place predicate to yield a sentence. My main aim is to advance the cause of predicate abstraction while cautioning against revisionism. In so doing, however, I shall pursue a secondary aim by conveying mixed blessings to those who hold the view that in the logical sense of “existence” some existing object is such as to exist contingently. Advocates of this view must concede Williamson’s recent contention that the domain of unrestricted objectual quantification could not have been narrower than it actually is, but predicate abstraction affords them some hope of accommodating this concession.
Neo-Fregean logicism attempts to base mathematics on abstraction principles. Since not all abstraction principles are acceptable, the neo-Fregeans need an account of which ones are. One of the most promising accounts is in terms of the notion of stability; roughly, that an abstraction principle is acceptable just in case it is satisfiable in all domains of sufficiently large cardinality. We present two counterexamples to stability as a sufficient condition for acceptability and argue that these counterexamples can be avoided only by major departures from the existing neo-Fregean programme.
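Stability, as glossed in this abstract, can be stated schematically (a standard formulation, not quoted from the paper):

```latex
% An abstraction principle AP is stable iff there is a cardinal \kappa
% such that AP is satisfiable in every domain of cardinality \geq \kappa:
\mathrm{Stable}(AP) \;\leftrightarrow\;
  \exists \kappa \;\forall \lambda \geq \kappa \;
  (AP \text{ has a model of cardinality } \lambda)
```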
Experimental philosophers have disagreed about whether "the folk" are intuitively incompatibilists or compatibilists, and they have disagreed about the role of abstraction in generating such intuitions. New experimental evidence using Construal Level Theory is presented. The experiments support the views that the folk are intuitively both incompatibilists and compatibilists, and that abstract mental representations do shift intuitions, but not in a univocal way.
Ethicists of care have objected to traditional moral philosophy's reliance upon abstract universal principles. They claim that the use of abstraction renders traditional theories incapable of capturing morally relevant, particular features of situations. I argue that this objection sometimes conflates two different levels of moral thinking: the level of justification and the level of deliberation. Specifically, I claim that abstraction or attention to context at the level of justification does not entail, as some critics seem to think, a commitment to abstraction or attention to context at the level of deliberation. It follows that critics who reject a theory's use of abstraction at the level of justification have not shown that the theory recommends abstraction at the level of deliberation and that it, therefore, compels the deliberating agent to overlook morally salient details.
Little is known of Edmund Husserl's direct encounter with Georg Cantor's ideas on Platonic idealism and the abstraction of number concepts during the late 19th century, when Husserl's philosophical orientation changed considerably and definitely. Closely analyzing and comparing the two men's writings during that important time in their intellectual careers, I describe the crucial shift in Husserl's views on psychologism and metaphysical idealism as it relates to Cantor's philosophy of arithmetic. I thus establish connections between their ideas which have until now been virtually unsuspected, and contribute to a better understanding of the development of Husserl's thought and of the philosophical and metaphysical ideas within which Cantor chose to frame his theories.
We argue against theory-of-mind interpretation of recent false-belief experiments with young infants and explore two other interpretations: enactive and behavioral abstraction approaches. We then discuss the differences between these alternatives.
Frege suggests that criteria of identity should play a central role in the explanation of reference, especially to abstract objects. This paper develops a precise model of how we can come to refer to a particular kind of abstract object, namely, abstract letter types. It is argued that the resulting abstract referents are ‘metaphysically lightweight’.
This article is an extended critical study of Kit Fine’s The limits of abstraction, which is a sustained attempt to take the measure of the neo-logicist program in the philosophy and foundations of mathematics, founded on abstraction principles like Hume’s principle. The present article covers the philosophical and technical aspects of Fine’s deep and penetrating study.
While Hermann Lotze's philosophy was widely received all over the world, his views on abstraction and Platonic ideas are of particular interest because they were to a large extent adopted by one of the most eminent philosophers of the twentieth century, namely Edmund Husserl. In this paper these views are examined in three distinct aspects. The first of these aspects is to be found in Lotze's thesis that there is a mental process, prior to abstraction, whereby "first universals" are apprehended. The second one lies in his view that there is yet a higher level of apprehension, as found in the process of abstraction itself. According to Lotze, abstraction is not to be identified with the mere removal of particular features, but rather with the replacement of these by first universals, resulting in "general images" and ultimately concepts. In addition to Lotze's analysis of the cognition of universals, there is finally a third thesis (an ontological one) which is examined in this paper, namely that the universals are Platonic Ideas in the sense that they have "validity" (Geltung) independently of their corresponding particulars and also of the mind which grasps them. The three claims in question are examined here in detail. Also, an attempt is made to point out some of the connections between Lotze and Husserl on the topic under discussion.
This paper discusses some aspects of the controversies regarding the operation of the agent intellect on sensory images. I selectively consider views developed between the 13th century and the beginning of the 17th century, focusing on positions which question the need for a (distinct) agent intellect or argue for its essential "inactivity" with respect to phantasms. My aim is to reveal limitations of the Peripatetical framework for analyzing and explaining the mechanisms involved in conceptual abstraction. The first section surveys developments of Aristotelian noetics and abstraction in Ancient and Arabic philosophy. The second section presents a discussion of some "positive" accounts on abstraction and the agent intellect, and some "negative" accounts.