Chapter 5: The Benefits of Realism: A Realist Logic with Applications

Barry Smith

One major obstacle to realizing the general goal of building a bridge between computers and reality on the side of the patient is the existence of multiple, mutually incompatible – and often impoverished – logical resources bequeathed to those working to improve Electronic Health Record (EHR) systems. In what follows, we will describe a logical framework that is more suitable for the purposes of the realist orientation and provide some examples of how it can be put to use.

1. The Background of First-Order Logic (FOL)

In 1879, Gottlob Frege invented the first logical system with a logically perfected language as well as a system of grammatical transformations of the sentences in that language which facilitate processing of information expressed with the language. This system developed into the standard in contemporary symbolic logic, which is known as first-order logic (FOL). Contemporary computer languages, such as the Web Ontology Language (OWL), are fragments of FOL which have certain desired computational properties. The language of FOL consists of individual terms (constants and variables), representing things in reality; predicates, representing properties and relations; logical connectives such as 'and' and 'if...then...'; and quantifiers ('for every', 'there is some'). The range of variables is normally specified in advance, for example as all individuals, all persons, all numbers, and so forth. The quantifiers are then interpreted accordingly. In some cases quantification is said to be universal, and then the range of variables does not need to be specified – it comprehends, in a sense to be specified below, everything. As an illustration of the use of these ingredients, consider the assertion 'All horses' heads are animal heads.'
In FOL, this would read:

For every individual x, if horse_head(x), then there is some individual y, such that animal(y) and head_of(x, y)

[From Katherine Munn and Barry Smith (eds.), Applied Ontology: An Introduction, Frankfurt/Lancaster: ontos/Walter de Gruyter, 2008.]

Or, to incorporate more of the standard FOL syntax:

∀x [horse_head(x) → ∃y (animal(y) & head_of(x, y))]

Here, the range of variables is all individuals; 'horse_head' and 'animal' are predicates applied to single individuals; and 'head_of' represents a relation between two individuals. To assert that Secretariat is a horse and has a head, we would write:

horse(Secretariat) & ∃x [head(x) & head_of(x, Secretariat)]

treating 'Secretariat' as a constant term. To assert that some horse has a head, we would write:

∃y [horse(y) & ∃x (head(x) & head_of(x, y))]

First-order logic gets its name because the sentences in first-order language allow quantification (use of 'for every' and 'there is some') only in relation to what we can think of as 'first-order entities', which means: entities in the range of the variables (which together form the universe of discourse), and thus not in relation to higher-order entities, such as the properties and relations to which the predicates in the language of FOL ('horse( )', etc.) correspond. On standard readings of FOL, the universe of discourse consists only of particular items such as persons or numbers. On these standard readings, to say that quantification is universal is to say that when we say 'for all x, such and such holds' we are making an assertion about all individual entities in the universe. To make a general statement about objects of a given sort, this statement must be parsed as a conditional assertion. To express the fact that all dogs are four-legged, one has to write a sentence like: 'for every individual x, if dog(x) then fourlegged(x)'. The reader should notice that, given its conditional form ('if ...
then ...'), using this sentence does not commit one to the existence of dogs or of four-legged beings. FOL's use of variables hereby allows one to forget that there are real, fundamental distinctions between the sorts of things that exist in reality. In fact, statements about dogs formulated in FOL can perfectly well be conceived as statements about any object in the universe whatsoever, namely that if it is a dog, then it is four-legged. Here the object plays no essential role in the sentence. We do not even know, from the standard first-order sentence, whether or not any dogs exist.

2. A Realist Understanding of First-Order Logic

In principle, the variables of FOL can range over entities of any sort. In standard practice, however, they have been largely conceived as ranging over individuals (particulars existing in space and time). In keeping with the broadly nominalist slant of most logically orientated philosophers of the 20th century, the universe, from this standard point of view, is the universe of individual things. In Smith (2005), an alternative conception of FOL was advanced, differing from this standard conception only in that it deviates, explicitly, from the standard nominalist reading of the range of variables of the original FOL. The alternative view is in the spirit, rather, of Aristotelian realism and accepts, in addition to individual things, universals (kinds, types) as entities in reality. The range of the variables, then, is conceived of as embracing not only particulars but also universals. The result is still FOL, in the sense that a distinction is drawn between predicate expressions, on one hand, and variable and constant terms, on the other. Quantification is still not allowed in relation to the former, and so the logic is still FOL of the perfectly standard sort.
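To make the quantificational machinery of section 1 concrete, here is a minimal sketch (not from the chapter; the universe and predicate extensions are invented for illustration) which evaluates the horse-head sentence over a small finite universe of discourse:

```python
# Evaluating ∀x [horse_head(x) → ∃y (animal(y) & head_of(x, y))]
# over a toy finite model. Names are hypothetical.

universe = {"h1", "secretariat"}       # the universe of discourse
horse_head = {"h1"}                    # extension of horse_head( )
animal = {"secretariat"}               # extension of animal( )
head_of = {("h1", "secretariat")}      # extension of head_of( , )

# 'if P then Q' is rendered as 'not P or Q'; the quantifiers become
# all(...) and any(...) over the finite universe.
sentence = all(
    (x not in horse_head)
    or any(y in animal and (x, y) in head_of for y in universe)
    for x in universe
)
print(sentence)  # True in this model
```

In a model with a horse's head but no animal bearing it, the same expression would evaluate to False, which is exactly the conditional, non-existence-committing reading described above.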
But because universals are included in the range of variables, we can now formulate assertions like 'there is some quality which John has in virtue of which he is undergoing a rise in temperature' in this fashion:

∃x [quality(x) & inheres_in(x, John) & ∃y (rise_in_temperature(y) & causes(x, y))]

A realist logic of this sort provides the tools needed to deal, in a rigorous way, with real-world instances, and to relate such instances to universals as well as to the general terms used in terminologies. Similarly, drawing on certain ideas worked out in Davidson (1980), it can relate individual things to the processes (events, occurrents) in which they participate. We can connect general terms to reality by defining the relationships between terms that refer to universals by way of the relationships between their instances (Smith, et al., 2004). In this way, we can provide a simple rigorous account of the relations captured by ontologies such as the Gene Ontology. Thus, for two universals A and B we can define 'A part_of B' or 'B has_part A' as, respectively:

Every instance of A is part of some instance of B, or
Every instance of B has some instance of A as part,

or in symbols:

A part_of B =def. ∀x [inst(x, A) → ∃y (inst(y, B) & x part_of y)].

In other words, A part_of B holds if and only if: for every individual x, if x instantiates A then there is some individual y such that y instantiates B and x is a part of y. Correspondingly,

B has_part A =def. ∀y [inst(y, B) → ∃x (inst(x, A) & x part_of y)],

or in other words, B has_part A holds if and only if: for every individual y, if y instantiates B then there is some individual x such that x instantiates A and x is a part of y. Here 'inst' stands for the relation of instantiation between some individual entity and some universal; for example, between Mary and the universal human being.
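These definitions can be sketched over a toy model of instances (the instance names here are invented; only the definitions themselves come from the text). Including a cell without a nucleus, such as a mature red blood cell, reproduces the asymmetry between part_of and has_part:

```python
# Universal-level relations defined via the instance-level relations
# inst and part_of, as in the definitions above. Toy data, hypothetical names.

# inst(x, A): individual x instantiates universal A
inst = {
    ("nucleus1", "cell nucleus"),
    ("cell1", "cell"),
    ("cell2", "cell"),              # a cell with no nucleus
    ("mary", "human being"), ("mary", "mammal"),
}
# instance-level parthood
part_rel = {("nucleus1", "cell1")}

def instances(A):
    return {x for (x, u) in inst if u == A}

def u_part_of(A, B):
    # every instance of A is part of some instance of B
    return all(any((x, y) in part_rel for y in instances(B)) for x in instances(A))

def u_has_part(B, A):
    # every instance of B has some instance of A as part
    return all(any((x, y) in part_rel for x in instances(A)) for y in instances(B))

def is_a(A, B):
    # every instance of A is an instance of B
    return instances(A) <= instances(B)

print(u_part_of("cell nucleus", "cell"))   # True
print(u_has_part("cell", "cell nucleus"))  # False: cell2 has no nucleus
print(is_a("human being", "mammal"))       # True
```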
The parthood relations between universals treated by ontologists are hereby connected to the more primitive relation of part_of between instances, which is involved, for example, when we say that 'this finger is part of this hand', or 'that step is part of that walk'. Note that assertions using 'part_of' and 'has_part' are logically distinct. We can see this, for example, if we consider that, for A = cell nucleus and B = cell, the first is true but the second is false. Along the same lines we can also define the ontologist's is_a (is a subtype of) relation as follows:

A is_a B =def. ∀x [inst(x, A) → inst(x, B)].

In other words, every instance of universal A is an instance of universal B (as in: all human beings are mammals). We can quantify, too, over universals, for instance if we assert:

∀x [occurrent(x) → ∃y (y is_a continuant & ∃z (inst(z, y) & z participates_in x))]

This asserts that, for every occurrent x, there is some entity y (a universal) which is a subtype of the universal continuant, and which is such that at least one of its instances z is a participant in the occurrent x. This ability to quantify over real-world universals and instances is one feature of realist logic that makes it suitable for use in ontology-based information systems. Its flexibility of quantification enables it to be used to track particular instances in EHRs and to link them to universals and, as we will see in Chapter 10, to build terminologies in such a way that their definitions reflect the knowledge that scientists actually have about a given universal, rather than about some associated concepts in their minds.

3. Concept Logic

In the 1930s, the great Austrian terminologist Eugen Wüster laid down the central principles of the standard for terminologies propagated by the International Organization for Standardization (ISO) ever since.
Unfortunately, instead of adopting FOL, Wüster opted for an older (and weaker) form of concept logic (CL) propagated inter alia by Kant, in which real-world objects play no essential role. First-order logic relates each term to instances in reality, and the logic is applied through the process of quantification, which draws the range of its variables from entities in reality. By contrast, instead of relations between terms and entities in reality, CL deals with relations between concepts, such as the narrower_than relation, which holds when one concept (for example, cervical cancer) is narrower in meaning than another concept (for example, cancer). (Thus, CL deals with general terms in the manner of the dictionary maker.) Now, clearly there are a number of connections between this narrower_than relation and the ontologist's standard is_a. However, from the perspective of CL, narrower_than is a relation between meanings, which holds equally as a relation between mythical or fictional entities as between the entities in reality with which science deals. And this is the case, similarly, for the other relations of Wüsterian concept logic. For example, ISO (2005) defines the whole-part relation as follows: 'this relationship covers situations in which one concept is inherently included in another, regardless of the context, so that the terms can be organized into logical hierarchies, with the whole treated as a broader term' (p. 49). Unfortunately, this fixation with concepts results in a logic that is not capable of capturing the logical distinction between universals and instances, so that the part_of relation between, say, Toronto and Ontario is treated as identical to that between brain and central nervous system (see ISO, 'Guidelines for the Construction, Format, and Management of Monolingual Controlled Vocabularies', 2005).
A similar concept logic approach underlies much of the work on so-called semantic networks in the AI field in the 1970s (for an overview, see Sowa, 1992). Semantic networks were viewed, initially, with considerable optimism concerning their potential to support what is still called knowledge representation and reasoning (Brachman, 1979). The dawning awareness that this optimism was misplaced was a causal factor in the initial experiments in the direction of what would later come to be called Description Logics (DLs) (Nardi and Brachman, 2003). The latter fall squarely within the Fregean tradition – effectively, they are a family of computable fragments of FOL – and thus they, too, have some of the resources needed to deal with reasoning about instances. Unfortunately, however, while instances do indeed play a role in the DL world, the instances at issue in DL are often not of this world; they are not instances of the sorts encountered, for example, in clinical practice. Work within the DL community – which is often focused on mathematical proxies for real-world instances which exist inside artificial models created ad hoc – has led to significant developments in understanding. However, it has served the logicians' technical purposes of testing consistency and other properties of their systems, rather than the ontologists' practical purposes of relating a terminology to instances in reality. With its distinction between T-Box (containing terminological knowledge, that is, knowledge about concepts) and A-Box (containing data pertaining to the individual instances in spatio-temporal reality), DL can certainly support reasoning about both concepts and their instances in reality (Brachman, 1979). But the DL community has its roots in the traditional nominalist understanding of FOL, in which the variables and constant terms range over individual things exclusively.
Thus, it has paid scant attention to the treatment of instances in different ontological categories; for example, to the differences between instances of attribute kinds (your temperature, your blood pressure) and instances of event kinds (your breathing). Similarly, applications of DL-based formalisms in medical terminologies such as GALEN, SNOMED CT, and the National Cancer Institute Thesaurus have not exploited its resources for reasoning about instances; rather, they have used the DL structure as a tool for error-checking on the terminological level. And this is so in spite of the fact that one central purpose to which such terminologies could be applied is to support the coding of EHRs, which relate, precisely, to instances in reality.

4. 'Terminology' Defined

Terminologies have certain parts and structures in common. Delineating these parts and structures will help us to obtain an explicit understanding of what a terminology is and, hence, of the advantages a terminology can provide if it is constructed along the lines of a realist orientation. In order to understand its components and structure, we may describe a terminology more technically as a graph-theoretic object (of the sort presented in Figure 1) consisting of nodes joined together by links, the whole indexed by version number. Multi-sorted logic enables us to codify this information into a formal definition of 'terminology' (Smith, et al., 2006). What are the common components of terminologies? First, there are nodes, represented as the tips of branch-like structures. There are three kinds of information which a node may contain, namely: (1) a preferred term p, (2) any synonyms Sp which this term may have, and (3) (ideally) a definition d for that term (and its synonyms).
Figure 1: Graph-theoretic Representation of the FMA Terminology

There are various ways in which nodes can relate to one another in such a graph; for example, lower nodes can relate to higher ones in relations such as part_of, is_a, and so forth (for more on relations see Chapters 10 and 11). These relations among nodes are represented by links (L), the second kind of information which terminologies contain. Links may be represented visually as the branches which connect the nodes. Reality contains an almost infinite number of relations in which entities may stand to one another. Ideally, there would be as many kinds of links as there are kinds of relations. Realistically, however, a terminology is limited, and can only contain information about the most salient relations obtaining between the entities represented by terms in its nodes. Links contain two kinds of information, namely: (1) a description of the relation itself (r), and (2) a description of the way in which the relation obtains between the terms which the link connects (Lr, which describes p r q). Of course, these relations must either be explicitly defined or taken as primitives; in the latter case, they must be explicitly axiomatized so that their meaning is made clear. The third kind of information contained in terminologies pertains to the particular time (t) at which a particular version of a given terminology is in use. On a realist, scientifically oriented and evidence-based conception, our terminologies ought to evolve as our knowledge of the world evolves. It is crucial to keep track of these changes in our knowledge so that we know how terms are used now, and the ways in which terms were used previously to describe our previous working view of what the world was like. Hence, each version of a terminology must be indexed according to a particular time.
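The components just enumerated – nodes carrying a preferred term p, synonyms Sp, and a definition d; links carrying a relation r and its set of pairs Lr; and a time index – can be sketched as Python structures. This is our own illustrative encoding, not part of any standard; all names are hypothetical:

```python
# A hypothetical sketch of a terminology's components as data structures.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    preferred_term: str                  # p
    synonyms: frozenset = frozenset()    # Sp
    definition: str = ""                 # d (ideally present)

@dataclass
class Link:
    relation: str                            # r, e.g. 'is_a' or 'part_of'
    pairs: set = field(default_factory=set)  # Lr: pairs (p, q) with p r q

@dataclass
class Terminology:
    nodes: list     # N
    links: list     # L*
    version: str    # version index, encoding the time of preparation

T = Terminology(
    nodes=[Node("cell nucleus"), Node("cell", frozenset({"cellula"}))],
    links=[Link("part_of", {("cell nucleus", "cell")})],
    version="2008-01",
)
print(T.version)
```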
We can use a realist logic to provide a precise definition of a terminology and, thereby, to record information about terminologies themselves. Let n1, n2, n3, ... name individual nodes in a terminology graph. Let L1, L2, L3, ... name individual links. Let v1, v2, v3, ... stand in for particular dates. A terminology, then, is an ordered triple:

T = <N, L*, vn>, where:

N is the set of nodes n1, n2, n3, ... in the terminology, where each ni is a triple <p, Sp, d>, with p a preferred term, Sp a set of synonyms, and d a definition (ideally);

L* is the set of L1, L2, L3, ..., where each Li is a link that consists of an ordered pair <r, Lr>, with a relation designation r ('is_a', 'part_of', etc.), together with a set Lr of ordered pairs <p, q> of those preferred terms for which 'p r q' represents a consensus assertion of biomedical science about the corresponding universals at the time when the given terminology is prepared; and

vn is a version number, which encodes this time.

On our realist account, the variables p, q, d, r, v, ... stand simply and unambiguously for syntactic entities, or strings of characters in some regimented language. These syntactic entities include what are called preferred terms, which are the officially recommended representations of given universals in reality. Such preferred terms are recorded in the terminology, along with the various synonyms (the ways of referring to this universal) used by sub-communities of specialists. Such preferred terms may prove to be erroneous; that is, we may discover through scientific inquiry that a given term (for example 'phlogiston', or 'aura') corresponds to no universal and, thus, to no instances in reality. By contrast, according to the concept orientation the mentioned variables are seen as ranging, not over syntactic strings, but over concepts in people's minds.
From the perspective of the concept orientation, there is a one-to-one correspondence between preferred terms and concepts, and this has the unfortunate result that every preferred term in a terminology is guaranteed a referent. So, for example, on the concept orientation there is no way to express the discovery that the term 'caloric' does not, in fact, correspond to anything in reality at all. Our realist account creates no such problem. Some terms within the range of our variables will not correspond to a universal in reality; like 'unicorn', 'phlogiston', or 'caloric', they will be empty names. Other terms represented by these variables will have the opposite problem in that they will correspond to too much in reality, that is, they will refer ambiguously to a plurality of universals. When evaluating terminologies, we need to take both of these alternatives into account by considering the entire terminology T = <N, L*, v> in light of its status as a map of an analogous structure of universals on the side of reality. In the ideal situation, where all of our terms perfectly represent universals in reality, we could indeed associate N in one-to-one fashion with some corresponding set U of the universals designated by its constituent nodes. However, really existing terminologies fall short of this ideal in the three ways identified in what we can think of as realist counterparts of Cimino's criteria of non-vagueness, non-ambiguity, and non-redundancy (Cimino, 1998). This means (roughly, and for our present purposes) that, at any given stage, the nodes of any terminology will be divided into three groups: N1, N>, and N<.
In other words, N = N1 ∪ N> ∪ N<, where N1 consists of those nodes in N whose preferred terms correspond to exactly one universal, N> of those nodes in N whose preferred terms correspond to more than one universal (in various combinations), and N< of those nodes in N whose preferred terms correspond to less than one universal (in the simplest case, to no universal at all). Our realist account assumes that, with the passage of time, N> and N< will become ever smaller, so that N1 will approximate N ever more closely. However, this assumption must be qualified in reflection of the fact that N is itself changing, as our knowledge of the salient universals in biomedical reality expands through new discoveries. Our knowledge of the successes medical science has had to date gives us strong reason to believe that N1 constitutes a large portion of N. N, remember, is a collection of terms already in use, each one of which is intended to represent a biomedical universal. N includes very many presently uncontroversial terms which we are normally inclined to overlook, such as 'heart' or 'tumor'. At the same time, our knowledge of the ways errors continue to be uncovered in specific terminologies gives us reason to believe that we have some way to go before N> and N< can be excised completely, if this will ever be possible. Moreover, we know a priori that at no stage (prior to that longed-for end to our labors that seems forever just out of reach) will we know precisely where the boundaries are to be drawn between N1, N>, and N<; that is, we will never know precisely which portions of N consist of the low-value N>- and N<-type terms. The reason for this is clear: if we did know where such terms were to be found, then we would already have the resources needed to expand the size of N1 correspondingly and, hence, to move its boundaries to a different position closer to N.
However, on the realist orientation this unavoidable lack of knowledge of the boundaries of N1 is not a problem, since it is, after all, N, and not N1, which is the focus of the practical labors of ontologists. It is N which represents our (putative) consensus knowledge of the universals in the relevant domain of reality, at any given stage. Thus the whole of N, as far as the developers and users of a given terminology are concerned, consists of names of universals. But if we do not know how the terms are presently distributed among the three groups, does this mean that the distinction between N1, N>, and N< is of purely theoretical interest, a matter of abstract philosophical housekeeping that is of no concrete significance for the day-to-day work of terminology development and application? Not at all. Typically, we will have not just one version of a terminology, but a developing series of terminologies at our disposal. In uncovering errors immanent to a terminology, we thereby uncover terms which must be excluded from future versions because they do not correspond to universals. Given the resources of our realist approach, however, we do not need to wait for the actual discovery of error; for we can carry out experiments with terminologies themselves, which means that we can explore through simulations the consequences of different kinds of mismatch between our terms and reality. For more detail see Ceusters (2006), Ceusters and Smith (2006), and Ceusters, Spackman and Smith (2007).

5. A Formal Framework for Terminology Experimentation

Once again, consider our scenario of the way in which a medical term describing a disease or a disorder is introduced into our language. The instances in our initial pool of cases, as well as certain regularities and patterns of irregularities (deviations from the norm) which they exemplify, are well known to the physicians involved. However, the universal which they instantiate is unknown.
The challenge, in this case, is to solve for this unknown, in a manner similar to the way in which astronomers postulated an unknown heavenly body, later identified as Pluto, in order to explain irregularities in the orbits of Uranus and Neptune. Three different kinds of solution can present themselves, according as the cases of disorders in the pool are (i) instances of exactly one universal, (ii) instances of no universal at all, or (iii) instances of more than one universal. In what follows, we will present a rigorous framework which is designed to put us in a position where we can extract certain kinds of valuable information from the resources provided by terminologies and EHRs. We believe that, in the long run, this information can enable terminologies and EHRs to play much larger roles in quality control, in supporting decisions in the process of diagnosing medical disorders, and in facilitating scientific discoveries. Note that this idea will only be realizable in a future world of sophisticated EHRs in which instances in clinically salient categories are tracked by means of instance unique identifiers (IUIs) of the sort described in Chapter 4. Each such IUI would be associated with other relevant information about the disorder or disease in question as it is expressed in a particular case. We can think of the result as a vector (an ordered n-tuple) of instance information, comprehending coordinates for the following kinds of information: (1) the relevant terms in one or more terminologies; (2) cross-references to the IUIs assigned to those other particulars (such as patients) with which the disorder under scrutiny is related; and (3) the measured values of relevant attributes such as temperature and blood pressure, as well as bio-assay data such as gene expression. Each coordinate will then be indexed by time of entry, source, and estimated level of evidence.
We will call the sum of all information that is pertinent to a particular manifestation of a disorder an instance vector. A definition of 'instance vector' will thus include variables for each of the following components: i, an IUI; p, a preferred term in a terminology; and t, the designation of a time at which the particular catalogued by i is asserted to be an instance of the universal (if any) designated by p (for details see Ceusters and Smith, 2007). Thus, an instance vector can be expressed as an ordered triple <i, p, t>. Suppose, for example, that i corresponds to patient Brown's hernia, p to the term 'hernia', and t to the time at which his hernia was discovered. Our goal is to see formally how a given terminology at a given time is linked to a given set of IUIs (containing information gathered, for example, by a single healthcare institution during a given period). In order to achieve this, we need a formal way of representing a terminology as it exists at a given time and as it corresponds with a set of instance vectors. We will call this combination of terminological information with instance information a t-instantiation, represented by the variable It. Thus, for a given set D of IUIs, we can define a t-instantiation It(T, D) of a terminology T = <N, L*, v> as: the set of all instance vectors <i, p, t> for i in D and p in N. For example, each record containing the IUI corresponding to patient Brown's hernia (i) at time t, where i is an IUI that is a member of the set D and 'hernia' is a term (p) in the terminology N. Next, we need a way to map the extension of the universal designated by the term p in the particular domain of reality selected for by D at time t, assuming that p does indeed designate a universal (we address this assumption below). In other words, we want to define, for each term p, the set of all IUIs for which the instance vector is included in the t-instantiation. We will call this the t-extension of p.
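The definitions just given can be sketched directly over a handful of invented instance vectors (the IUIs, terms, and dates below are hypothetical examples in the spirit of the Brown's-hernia case):

```python
# Instance vectors <i, p, t>, a t-instantiation I_t(T, D), and the
# t-extension of a term p. Toy data; all identifiers are hypothetical.

# D: a set of instance unique identifiers (IUIs)
D = {"iui-001", "iui-002", "iui-003"}

# vectors <i, p, t>: IUI i is asserted at time t to instantiate
# the universal designated by preferred term p
vectors = {
    ("iui-001", "hernia", "2007-03-01"),   # e.g. patient Brown's hernia
    ("iui-002", "hernia", "2007-04-12"),
    ("iui-003", "tumor",  "2007-05-30"),
}

def t_instantiation(vectors, D):
    """All instance vectors whose IUI belongs to D."""
    return {(i, p, t) for (i, p, t) in vectors if i in D}

def t_extension(p, vectors, D):
    """All IUIs in D asserted to instantiate the universal designated by p."""
    return {i for (i, q, t) in t_instantiation(vectors, D) if q == p}

print(sorted(t_extension("hernia", vectors, D)))  # ['iui-001', 'iui-002']
```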
Our definition of t-extension enables us to examine, for each term p, its t-extensions for different values of D and t. This will enable us, in turn, to determine statistical patterns of different sorts, taking into account also, for each i, the other instance vectors in which i is involved through the relations in which the corresponding instances stand to other instances represented by IUIs in D. Our three alternative scenarios will then, once again, present themselves according to the status of each preferred term p in relation to the world of actual cases (the world which serves as standard for the truth and falsity of our assertions):

1. p is in N1 (there is a single universal designated by p) and, in this case, the instances in It(T, D)(p) have in common a specific invariant pattern (which should be detectable through the application of appropriate statistically based tools);

2. p is in N> (p comprehends a plurality of universals, for example in a manner analogous to the term 'diabetes') and, in this case, the instances in It(T, D)(p) manifest no common pattern, but they (or the bulk of them) can be partitioned into some small number of subsets in such a way that the instances in each subset do instantiate such a pattern;

3. p is in N< (p corresponds to no universals) and, in this case, the instances in It(T, D)(p) manifest no common pattern, and there is no way of partitioning them (or the bulk of them) into one or a small number of subsets in such a way that all the instances in each subset instantiate such a pattern.

6. Reasoning with Instance Identifiers: Three Applications

There are at least three applications for a system along the lines described. Such a system could be used, first of all, for purposes of quality control of terminologies (and thus, for purposes of automatically generating improved versions of terminologies).
For a given disorder term p, we gauge whether p is in N1, N>, or N< by applying statistical measures to the similarities between the vectors associated with each of the members of the relevant instantiations. For example, two vectors are similar if the data they contain are close numerically (say, if two times are close to one another in a sequence), or if two terms represent the same or similar types, or if they represent the same entity on the instance level (say, a set of IUIs signifies the same disorder in the same patient). Here is an example of the benefit of applying statistical measures to the similarities between vectors. If the measure of similarity between vectors is both roughly similar for all members of a given instantiation and also roughly constant across time when measures are applied to instances for which we have similar amounts of data of similarly high evidence-value, then this will constitute strong evidence for the thesis that p is in N1. If, on the other hand, we find high similarity for some disorder term before a certain time t but much lower degrees of similarity after some later time t+, then we can hypothesize that the relevant disorder has itself undergone some form of mutation, and we can experiment with adding new terms and then repartitioning the available sets of IUIs in such a way as to reach, once again, those high levels of similarity which are associated with the N1 case. In due course, such revision of terminologies will give rise, in the opposite direction, to revisions of the information associated as vectors with each of the relevant IUIs. We might, for example, discover that a given single disorder term has thus far been applied incorrectly to what are in fact instances of a plurality of distinct disorders. Such revision will lead, in turn, to better quality clinical record data, which may give rise to further revisions in our terminologies.
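The idea of gauging a term's status from the similarity of its instance vectors can be sketched with toy data and a deliberately crude similarity measure (both invented here; a real system would use the statistical tools mentioned above, such as cluster analysis):

```python
# Grouping the instance vectors recorded for a single disorder term p.
# One tight cluster suggests p is in N1; several clusters suggest N>;
# no grouping at all suggests N<. Data and thresholds are hypothetical.

# numeric features per case, e.g. (temperature in °C, systolic BP in mmHg)
cases = {
    "iui-1": (37.9, 120), "iui-2": (38.1, 118), "iui-3": (38.0, 122),  # one pattern
    "iui-4": (40.5, 180), "iui-5": (40.3, 178),                        # a second pattern
}

def close(a, b, tol=(1.0, 10)):
    """Two cases count as similar if every feature is within tolerance."""
    return all(abs(x - y) <= t for x, y, t in zip(a, b, tol))

# a simple single-link grouping pass over the cases
clusters = []
for iui, v in cases.items():
    for c in clusters:
        if any(close(v, cases[other]) for other in c):
            c.add(iui)
            break
    else:
        clusters.append({iui})

print(len(clusters))  # 2: the term may comprehend two distinct universals
```

Here the five cases filed under one term fall into two internally similar groups, the situation the text describes for N>: a single disorder term applied to instances of a plurality of distinct disorders.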
Second, such methods for reasoning with terminology and instance data might be used to support decisions in the process of diagnosis. In a world of abundant instance data, one goal of an adequate terminology-based reasoning system would be to allow the clinician to experiment with alternative term-assignments to given collections of instance data, in ways which would allow measurement of the greater or lesser likelihood of given diagnoses on the basis of statistical properties of the patterns of association between terms and instances. Thus, we could imagine software which would allow experimentation with alternative IUI and term assignments; for example, when it is unclear whether successive clusters of symptoms in a given patient should be counted as manifestations of a single disorder or of multiple disorders. The machinery of instantiations, then, could be used to test out alternative hypotheses regarding how to classify given particulars, by offering us the facility to experiment with different scenarios as concerns the division between N1, N>, and N< in relation to given cases. In the real world, of course, such methods cannot be applied successfully in every case. For example, we may not have all the data needed to convince a computer armed with a stock of universal terms and associated instance data that a given case meets the requirements for any available diagnosis. Such a situation, however, is no different from that which is faced already by the practicing physician, who must decide from case to case how much data to collect (for example, how often to take the temperature of a given patient) in order to achieve a succession of better approximations to what then establishes itself as a good diagnosis. He learns how to do this, first, from medical textbooks and education, then through experience and by following guidelines and protocols. Finally, the methodology presented here can be used to facilitate scientific discoveries.
Suppose, for example, that the length of a patient's nose is correlated with a certain specific disease, but that this fact is unknown to medical science. Why should anyone start to register a patient's nose-length in the way that we do now for, say, temperature or blood pressure? The answer is that we do so already. Many hundreds of thousands of patients have undergone plastic surgery for cosmetic nose corrections. In each case, the length of the nose is measured as a matter of course. Many of these patients visited other physicians for totally different problems (before, at the same time, or later). If all the physicians involved had been exploiting the potential of referent tracking as here conceived, then it would not be difficult to correlate these data, using brute-force techniques such as cluster analysis, principal component analysis, or factor analysis, to tease out the correlation in question, in just the way that scientific discoveries are sometimes made on the basis of instance-level data in other domains. (For more details, see Ceusters & Smith, 2007.)

7. Conclusion

In the ideal case, a biomedical terminology would provide not merely the resources for assigning preferred terms for universals to the corresponding instances in reality, but also a perspicuous map of how these universals themselves are related to each other in reality. When we take advantage of realist (instead of conceptual) logic, we can harness the information provided by such maps to accelerate our gains in knowledge about the world, by keeping track of the instances which fall within the range of the variables of our logic. As we conceive the EHR systems of the future, instance data will be, to a large degree, automatically partitioned at the point of data entry in ways reflecting the structure of the world of clinically relevant universals.
Currently, this partitioning of instances is masked from view in the clinical record, because the instance-level data held in separate EHRs is accessible only via the detour of reference to the individual patient. A regime for the management of terminologies and clinical data along the lines described above, however, would allow us to map directly the instances that are salient to medical care in such a way as to mirror how they are related together in reality, at the level of both instances and universals. In this way, it would make possible a new level of sophistication in reasoning about what exists on the side of the patient, which is the primary focus of medical care.

References

Brachman, R. (1979). On the Epistemological Status of Semantic Networks. In N. Findler (Ed.), Associative Networks: Representation and Use of Knowledge by Computers (pp. 3-50). New York: Academic Press.

Ceusters, W. (2006). Towards a Realism-Based Metric for Quality Assurance in Ontology Matching. In B. Bennett & C. Fellbaum (Eds.), Formal Ontology in Information Systems: Proceedings of the Fourth International Conference (FOIS 2006) (pp. 321-332). Amsterdam: IOS Press.

Ceusters, W. & Smith, B. (2006). A Realism-Based Approach to the Evolution of Biomedical Ontologies. In Proceedings of the Annual AMIA Symposium, Washington, DC (pp. 121-125).

Ceusters, W. & Smith, B. (2007). Referent Tracking and its Applications. In Proceedings of the WWW2007 Workshop i3: Identity, Identifiers, Identification, Banff, Canada, May 8, 2007. Retrieved from http://ceur-ws.org/Vol-249/.

Ceusters, W., Spackman, K. A. & Smith, B. (2007). Would SNOMED CT Benefit from Realism-Based Ontology Evolution? In Proceedings of the American Medical Informatics Association 2007 Annual Symposium (pp. 105-109).

Cimino, J. J. (1998). Desiderata for Controlled Medical Vocabularies in the Twenty-first Century. Methods of Information in Medicine, 37(4-5), 394-403.

Davidson, D. (1980). The Logical Form of Action Sentences. In Essays on Actions and Events (pp. 105-122). Oxford: Clarendon Press.

ISO (2005). Guidelines for the Construction, Format and Management of Monolingual Controlled Vocabularies. ANSI/NISO Z39.19-2005. ISBN: 1-880124-65-3.

Nardi, D. & Brachman, R. J. (2003). An Introduction to Description Logics. In F. Baader, et al. (Eds.), The Description Logics Handbook (pp. 1-40). Cambridge: Cambridge University Press.

Smith, B. (2005). Against Fantology. In M. Reicher & J. Marek (Eds.), Experience and Analysis (pp. 153-170). Vienna: HPT&ÖPV.

Smith, B., Kusnierczyk, W., Schober, D. & Ceusters, W. (2006). Towards a Reference Terminology for Ontology Research and Development in the Biomedical Domain. In O. Bodenreider (Ed.), Proceedings of the Second International Workshop on Formal Biomedical Knowledge Representation (KR-MED 2006) (pp. 57-65). Retrieved from http://www.CEUR-WS.org/Vol-222.