The Unified Foundational Ontology (UFO) was developed over the last two decades by consistently putting together theories from areas such as formal ontology in philosophy, cognitive science, linguistics, and philosophical logics. It comprises a number of micro-theories addressing fundamental conceptual modeling notions, including entity types and relationship types. The aim of this paper is to summarize the current state of UFO, presenting a formalization of the ontology, along with the analysis of a number of cases to illustrate the application of UFO and facilitate its comparison with other foundational ontologies in this special issue. (The cases originate from the First FOUST Workshop – the Foundational Stance, an international forum dedicated to foundational ontology research.)
Types are fundamental for conceptual modeling and knowledge representation, being an essential construct in all major modeling languages in these fields. Despite that, from an ontological and cognitive point of view, there has been a lack of theoretical support for precisely defining a consensual view on types. As a consequence, there has been a lack of precise methodological support for users when choosing the best way to model general terms representing types that appear in a domain, and for building sound taxonomic structures involving them. For over a decade now, a community of researchers has contributed to the development of the Unified Foundational Ontology (UFO) - aimed at providing foundations for all major conceptual modeling constructs. At the core of this enterprise, there has been a theory of types specially designed to address these issues. This theory is ontologically well-founded, psychologically informed, and formally characterized. These results have led to the development of a conceptual modelling language dubbed OntoUML, reflecting the ontological micro-theories comprising UFO. Over the years, UFO and OntoUML have been successfully employed in conceptual model design in a variety of domains, including academic, industrial, and governmental settings. These experiences exposed improvement opportunities for both the OntoUML language and its underlying theory, UFO. In this paper, we revise the theory of types in UFO in response to empirical evidence. The new version of this theory shows that many of OntoUML's meta-types (e.g. kind, role, phase, mixin) should be considered not as restricted to substantial types but instead should be applied to model endurant types in general, including relator types, quality types, and mode types. We also contribute a formal characterization of this fragment of the theory, which is then used to advance a new metamodel for OntoUML (termed OntoUML 2).
To demonstrate that the benefits of this approach extend beyond OntoUML, the proposed formal theory is then employed to support the definition of UFO-based lightweight Semantic Web ontologies with ontological constraint checking in OWL. Additionally, we report on empirical evidence from the literature, mainly from cognitive psychology but also from linguistics, supporting some of the key claims made by this theory. Finally, we propose computational support for this updated metamodel.
We discuss the role of perceptron (or threshold) connectives in the context of Description Logic, and in particular their possible use as a bridge between statistical learning of models from data and logical reasoning over knowledge bases. We prove that such connectives can be added to the language of most forms of Description Logic without increasing the complexity of the corresponding inference problem. We show, with a practical example over the Gene Ontology, how even simple instances of perceptron connectives are expressive enough to represent learned, complex concepts derived from real use cases. This opens up the possibility to import concepts learnt from data into existing ontologies.
DOLCE, the first top-level ontology to be axiomatized, has remained stable for twenty years and today is broadly used in a variety of domains. DOLCE is inspired by cognitive and linguistic considerations and aims to model a commonsense view of reality, like the one human beings exploit in everyday life in areas as diverse as socio-technical systems, manufacturing, financial transactions and cultural heritage. DOLCE clearly lists the ontological choices it is based upon, relies on philosophical principles, is richly formalized, and is built according to well-established ontological methodologies, e.g. OntoClean. Because of these features, it has inspired most of the existing top-level ontologies and has been used to develop or improve standards and public domain resources. Being a foundational ontology, DOLCE is not directly concerned with domain knowledge. Its purpose is to provide the general categories and relations needed to give a coherent view of reality, to integrate domain knowledge, and to mediate across domains. In these 20 years DOLCE has shown that applied ontologies can be stable and that interoperability across reference and domain ontologies is a reality. This paper briefly introduces the ontology and shows how to use it on a few modeling cases.
Many aspects of how humans form and combine concepts are notoriously difficult to capture formally. In this paper, we focus on the representation of three particular such aspects, namely overextension, underextension, and dominance. Inspired in part by the work of Hampton, we consider concepts as given through a prototype view, and by considering the interdependencies between the attributes that define a concept. To approach this formally, we employ a recently introduced family of operators that enrich Description Logic languages. These operators aim to characterise complex concepts by collecting those instances that apply, in a finely controlled way, to 'enough' of the concept's defining attributes. Here, the meaning of 'enough' is technically realised by accumulating weights of satisfied attributes and comparing with a given threshold that needs to be met.
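The weight-accumulation mechanism described above can be sketched in a few lines. This is an illustrative toy model, not the paper's formal operators: the concepts, attribute names, weights, and thresholds below are all assumed for the example. It shows how an instance (here, chess) can fall under a threshold-based combination of two concepts while failing one of the conjuncts — the overextension pattern.

```python
def satisfies(instance, weighted_attrs, threshold):
    """Instance belongs to the concept iff the summed weights of its
    satisfied attributes meet the threshold."""
    score = sum(w for attr, w in weighted_attrs if attr in instance)
    return score >= threshold

# Illustrative weighted definitions (all values assumed for the sketch).
sport = ([("physical", 3), ("competitive", 2), ("rule-governed", 1)], 4)
game = ([("rule-governed", 3), ("played-for-fun", 2), ("has-winner", 1)], 4)

# A combined concept that pools both attribute lists under one threshold,
# rather than requiring membership in each conjunct separately.
combined = (sport[0] + game[0], 6)

chess = {"competitive", "rule-governed", "played-for-fun", "has-winner"}

in_sport = satisfies(chess, *sport)        # 2 + 1 = 3 < 4  -> False
in_game = satisfies(chess, *game)          # 3 + 2 + 1 = 6 >= 4 -> True
in_combined = satisfies(chess, *combined)  # 9 >= 6 -> True: overextension
```

Under a plain conjunctive (intersection) reading, chess would be excluded because it misses the SPORT threshold; the pooled threshold admits it anyway.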
We propose a formal framework to examine the relationship between models and observations. To make our analysis precise, models are reduced to first-order theories that represent both terminological knowledge – e.g., the laws that are supposed to regulate the domain under analysis and that allow for explanations, predictions, and simulations – and assertional knowledge – e.g., information about specific entities in the domain of interest. Observations are introduced into the domain of quantification of a distinct first-order theory that describes their nature and their organization and keeps track of the way they are experimentally acquired or intentionally elaborated. A model mainly represents the theoretical knowledge or hypotheses on a domain, while the theory of observations mainly represents the empirical knowledge and the given experimental practices. We propose a precise identity criterion for observations and we explore different links between models and observations by assuming a degree of independence between them. By exploiting some techniques developed in the field of social choice theory and judgment aggregation, we sketch some strategies to solve inconsistencies between a given set of observations and the assumed theoretical hypotheses. The solutions to these inconsistencies can impact both the observations – e.g., the theoretical knowledge and the analysis of the way observations are collected or produced may highlight some unreliable sources – and the models – e.g., empirical evidence may invalidate some theoretical laws.
We introduce a number of logics to reason about collective propositional attitudes that are defined by means of the majority rule. It is well known that majoritarian aggregation is subject to irrationality, as the results in social choice theory and judgment aggregation show. The proposed logics for modelling collective attitudes are based on a substructural propositional logic that allows for circumventing inconsistent outcomes. Individual and collective propositional attitudes, such as beliefs, desires, and obligations, are then modelled by means of minimal modalities to ensure a number of basic principles. In this way, a viable consistent modelling of collective attitudes is obtained.
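The majoritarian irrationality mentioned above is the classic discursive dilemma, which can be reproduced in a few lines. The agenda and the three-agent profile below are the standard textbook example, assumed here for illustration: every individual judgment set is consistent, yet proposition-wise majority voting yields an inconsistent collective set.

```python
def majority(profile, proposition):
    """Collective acceptance by strict proposition-wise majority."""
    yes = sum(1 for judgment in profile if judgment[proposition])
    return yes > len(profile) / 2

# Three agents, each individually consistent on the agenda {p, q, p&q}.
profile = [
    {"p": True, "q": True, "p&q": True},
    {"p": True, "q": False, "p&q": False},
    {"p": False, "q": True, "p&q": False},
]

collective = {x: majority(profile, x) for x in ("p", "q", "p&q")}

# The majority accepts p and accepts q, yet rejects their conjunction:
# the collective judgment set is logically inconsistent.
consistent = collective["p&q"] == (collective["p"] and collective["q"])
```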
Preference relations are intensively studied in Economics, but they are also approached in AI, Knowledge Representation, and Conceptual Modelling, as they provide a key concept in a variety of domains of application. In this paper, we propose an ontological foundation of preference relations to formalise their essential aspects across domains. Firstly, we discuss the ontological status of the relata of a preference relation. Secondly, we investigate the place of preference relations within a rich taxonomy of relations (e.g. we ask whether they are internal or external, essential or contingent, descriptive or non-descriptive relations). Finally, we provide an ontological modelling of preference relations as a module of a foundational (or upper) ontology (viz. OntoUML). The aim of this paper is to provide a sharable foundational theory of preference relations that fosters interoperability across the heterogeneous domains of application of preference relations.
In recent years, there has been an increasing interest in the development of well-founded conceptual models for Service Management, Accounting Information Systems and Financial Reporting. Economic exchanges are a central notion in these areas and they occupy a prominent position in frameworks such as the Resource-Event Action (REA) ISO Standard, service core ontologies (e.g., UFO-S) as well as financial standards (e.g. OMG's Financial Industry Business Ontology - FIBO). We present a core ontology for economic exchanges inspired by a recent view on this phenomenon. According to this view, economic exchanges are based on an agreement on the actions that the agents are committed to perform. This view enables a unified treatment of economic exchanges, regardless of the object of the transaction. We ground our core ontology on the Unified Foundational Ontology (UFO), discussing its formal and conceptual aspects, instantiating it as a reusable OntoUML model, and confronting it with the REA standard and the UFO-S service ontology.
We analyse the computational complexity of three problems in judgment aggregation: (1) computing a collective judgment from a profile of individual judgments (the winner determination problem); (2) deciding whether a given agent can influence the outcome of a judgment aggregation procedure in her favour by reporting insincere judgments (the strategic manipulation problem); and (3) deciding whether a given judgment aggregation scenario is guaranteed to result in a logically consistent outcome, independently from what the judgments supplied by the individuals are (the problem of the safety of the agenda). We provide results both for specific aggregation procedures (the quota rules, the premise-based procedure, and a distance-based procedure) and for classes of aggregation procedures characterised in terms of fundamental axioms.
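Of the procedures listed above, the quota rules have the simplest winner determination: the collective accepts a proposition iff the number of individuals accepting it meets a fixed quota, so computing the outcome is a linear count. The sketch below is a minimal illustration with an assumed agenda and profile, not the paper's formal framework; unanimity and simple majority are the two quota instantiations shown.

```python
def quota_rule(profile, agenda, quota):
    """Accept each agenda item iff at least `quota` individuals accept it.
    Winner determination is a simple count per proposition."""
    return {p: sum(1 for j in profile if p in j) >= quota for p in agenda}

agenda = {"p", "q", "p->q"}

# Each individual judgment set lists the accepted agenda items (assumed).
profile = [{"p", "p->q", "q"}, {"p"}, {"q", "p->q"}]

unanimity = quota_rule(profile, agenda, quota=3)  # accept only if all agree
simple_majority = quota_rule(profile, agenda, quota=2)
```

Note that, unlike winner determination, safety of the agenda and strategic manipulation are not mere counting problems — that asymmetry is what the complexity analysis makes precise.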
The problem of merging several ontologies has important applications in the Semantic Web, medical ontology engineering and other domains where information from several distinct sources needs to be integrated in a coherent manner. We propose to view ontology merging as a problem of social choice, i.e. as a problem of aggregating the input of a set of individuals into an adequate collective decision. That is, we propose to view ontology merging as ontology aggregation. As a first step in this direction, we formulate several desirable properties for ontology aggregators, we identify the incompatibility of some of these properties, and we define and analyse several simple aggregation procedures. Our approach is closely related to work in judgment aggregation, but with the crucial difference that we adopt an open world assumption, by distinguishing between facts not included in an agent's ontology and facts explicitly negated in an agent's ontology.
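The open-world distinction described above can be sketched as a three-valued aggregator: each agent either asserts a fact, explicitly negates it, or is simply silent about it. The support rule, fact names, and data below are illustrative assumptions, not the paper's definitions; the point is only that silence is treated differently from explicit negation.

```python
def merge(views, facts):
    """Include a fact iff its supporters strictly outnumber its explicit
    objectors; agents silent on the fact count for neither side."""
    merged = set()
    for f in facts:
        support = sum(1 for v in views if v.get(f) is True)
        against = sum(1 for v in views if v.get(f) is False)
        if support > against:
            merged.add(f)
    return merged

# True = asserted, False = explicitly negated, absent = not in the ontology.
views = [
    {"Whale subClassOf Mammal": True},  # silent on the Fish axiom
    {"Whale subClassOf Mammal": True, "Whale subClassOf Fish": False},
    {"Whale subClassOf Fish": True},
]
facts = ["Whale subClassOf Mammal", "Whale subClassOf Fish"]
merged = merge(views, facts)
```

Under a closed-world reading, the first agent's silence on the Fish axiom would count as a rejection; here it simply abstains, which changes which merges are possible.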
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.
In recent years, there has been an increasing interest in the development of ontologically well-founded conceptual models for Information Systems in areas such as Service Management, Accounting Information Systems and Financial Reporting. Economic exchanges are central phenomena in these areas. For this reason, they occupy a prominent position in modelling frameworks such as the REA (Resource-Event-Action) ISO Standard as well as the FIBO (Financial Industry Business Ontology). In this paper, we begin a well-founded ontological analysis of economic exchanges inspired by a recent ontological view on the nature of economic transactions. According to this view, what counts as an economic transaction is based on an agreement on the actions that the agents are committed to perform. The agreement is in turn based on convergent preferences about the course of action to bring about. This view enables a unified treatment of economic exchanges, regardless of the object of the transaction, and complies with the view that all economic transactions are about services. We start developing our analysis in the framework of the Unified Foundational Ontology (UFO).
When people combine concepts, the results are often characterised as "hybrid", "impossible", or "humorous". However, when simply considering them in terms of extensional logic, the novel concepts, understood as conjunctive concepts, will often lack meaning, having an empty extension (consider "a tooth that is a chair", "a pet flower", etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior 'asymmetric' control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
We argue that a cognitive semantics has to take into account the possibly partial information that a cognitive agent has of the world. After discussing Gärdenfors's view of objects in conceptual spaces, we offer a number of viable treatments of partiality of information and we formalize them by means of alternative predicative logics. Our analysis shows that understanding the nature of simple predicative sentences is crucial for a cognitive semantics.
Public deliberation has been defended as a rational and noncoercive way to overcome paradoxical results from democratic voting, by promoting consensus on the available alternatives on the political agenda. Some critics have argued that full consensus is too demanding and inimical to pluralism and have pointed out that single-peakedness, a much less stringent condition, is sufficient to overcome voting paradoxes. According to these accounts, deliberation can induce single-peakedness through the creation of a 'meta-agreement', that is, agreement on the dimension according to which the issues at stake are 'conceptualized'. We argue here that once all the conditions needed for deliberation to bring about single-peakedness through meta-agreement are unpacked and made explicit, meta-agreement turns out to be a highly demanding condition, and one that is very inhospitable to pluralism.
In this paper, I investigate the relationship between preference and judgment aggregation, using the notion of ranking judgment introduced in List and Pettit. Ranking judgments were introduced in order to state the logical connections between the impossibility theorem of aggregating sets of judgments and Arrow’s theorem. I present a proof of the theorem concerning ranking judgments as a corollary of Arrow’s theorem, extending the translation between preferences and judgments defined in List and Pettit to the conditions on the aggregation procedure.
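The translation between preferences and judgments mentioned above can be sketched concretely: each linear preference order becomes a set of accepted pairwise propositions "x > y", and preference aggregation becomes judgment aggregation over those propositions. The sketch below is an illustration of this idea, not the paper's formal construction; it uses the standard Condorcet-cycle profile to show pairwise majority producing an intransitive — i.e. irrational — collective ranking, which is the bridge to Arrow's theorem.

```python
from itertools import permutations

def to_judgments(order):
    """Turn a linear order (best first) into the set of accepted
    pairwise propositions (x, y), read as 'x > y'."""
    return {(x, y) for i, x in enumerate(order) for y in order[i + 1:]}

# The classic Condorcet-cycle profile over alternatives a, b, c.
profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
judgment_sets = [to_judgments(order) for order in profile]

# Pairwise majority on the ranking judgments (2 of 3 agents suffice).
collective = {pair for pair in permutations("abc", 2)
              if sum(1 for j in judgment_sets if pair in j) >= 2}

# a > b, b > c, and c > a are all collectively accepted: a cycle.
cyclic = {("a", "b"), ("b", "c"), ("c", "a")} <= collective
```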
The impossibility results in judgement aggregation show a clash between fair aggregation procedures and rational collective outcomes. In this paper, we are interested in analysing the notion of rational outcome by proposing a proof-theoretical understanding of collective rationality. In particular, we use the analysis of proofs and inferences provided by linear logic in order to define a fine-grained notion of group reasoning that allows for studying collective rationality with respect to a number of logics. We analyse the well-known paradoxes in judgement aggregation and we pinpoint the reasoning steps that trigger the inconsistencies. Moreover, we extend the map of possibility and impossibility results in judgement aggregation by discussing the case of substructural logics. In particular, we show that there exist fragments of linear logic for which general possibility results can be obtained.
Informally speaking, a truthmaker is something in the world in virtue of which the sentences of a language can be made true. This fundamental philosophical notion plays a central role in applied ontology. In particular, a recent nonorthodox formulation of this notion proposed by the philosopher Josh Parsons, which we labelled weak truthmaking, has been shown to be extremely useful in addressing a number of classical problems in the area of Conceptual Modeling. In this paper, after revisiting the classical notion of truthmaking, we conduct an in-depth analysis of Parsons' account of weak truthmaking. By doing that, we expose some difficulties in his original formulation. As the main contribution of this paper, we propose solutions to address these issues, which are then integrated in a new precise interpretation of truthmaking that is harmonizable with.
This work contributes to the theory of judgement aggregation by discussing a number of significant non-classical logics. After adapting the standard framework of judgement aggregation to cope with non-classical logics, we discuss in particular results for the case of Intuitionistic Logic, the Lambek calculus, Linear Logic and Relevant Logics. The motivation for studying judgement aggregation in non-classical logics is that they offer a number of modelling choices to represent agents' reasoning in aggregation problems. By studying judgement aggregation in logics that are weaker than classical logic, we investigate whether some well-known impossibility results, that were tailored for classical logic, still apply to those weak systems.
We introduce a family of operators to combine Description Logic concepts. They aim to characterise complex concepts that apply to instances that satisfy "enough" of the concept descriptions given. For instance, an individual might not have any tusks, but still be considered an elephant. To formalise the meaning of "enough", the operators take a list of weighted concepts as arguments, and a certain threshold to be met. We commence a study of the formal properties of these operators, and study some variations. The intended applications concern the representation of cognitive aspects of classification tasks: the interdependencies among the attributes that define a concept, the prototype of a concept, and the typicality of the instances.
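The tusk-less elephant example above can be made concrete with a minimal sketch of such a threshold constructor. The feature names, weights, and threshold are illustrative assumptions (the actual operators work over Description Logic concepts, not Python sets): no single feature is required, yet the weighted total decides membership.

```python
def threshold_concept(weighted_concepts, t):
    """Build a concept: an individual falls under it iff the summed
    weights of the weighted concepts it satisfies meet threshold t."""
    def member(individual):
        return sum(w for c, w in weighted_concepts if c in individual) >= t
    return member

# Elephant via weighted features (all values assumed for the sketch).
elephant = threshold_concept(
    [("has_trunk", 4), ("has_tusks", 2), ("is_grey", 1), ("is_large", 1)],
    t=5)

tuskless = {"has_trunk", "is_grey", "is_large"}  # 4 + 1 + 1 = 6 >= 5
statue = {"is_grey", "is_large"}                 # 1 + 1 = 2 < 5

is_elephant = elephant(tuskless)
statue_is_elephant = elephant(statue)
```

A classical conjunction of all four features would exclude the tusk-less individual; the weighted threshold keeps it in while still rejecting the grey statue.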
Relevant logics provide an alternative to classical implication that is capable of accounting for the relationship between the antecedent and the consequent of a valid implication. Relevant implication is usually explained in terms of information required to assess a proposition. By doing so, relevant implication introduces a number of cognitively relevant aspects in the definition of logical operators. In this paper, we aim to take a closer look at the cognitive features of relevant implication. For this purpose, we develop a cognitively-oriented interpretation of the semantics of relevant logics. In particular, we provide an interpretation of Routley-Meyer semantics in terms of conceptual spaces and we show that it meets the constraints of the algebraic semantics of relevant logic.
DOLCE, the first top-level (foundational) ontology to be axiomatized, has remained stable for twenty years and today is broadly used in a variety of domains. DOLCE is inspired by cognitive and linguistic considerations and aims to model a commonsense view of reality, like the one human beings exploit in everyday life in areas as diverse as socio-technical systems, manufacturing, financial transactions and cultural heritage. DOLCE clearly lists the ontological choices it is based upon, relies on philosophical principles, is richly formalized, and is built according to well-established ontological methodologies, e.g. OntoClean. Because of these features, it has inspired most of the existing top-level ontologies and has been used to develop or improve standards and public domain resources (e.g. CIDOC CRM, DBpedia and WordNet). Being a foundational ontology, DOLCE is not directly concerned with domain knowledge. Its purpose is to provide the general categories and relations needed to give a coherent view of reality, to integrate domain knowledge, and to mediate across domains. In these 20 years DOLCE has shown that applied ontologies can be stable and that interoperability across reference and domain ontologies is a reality. This paper briefly introduces the ontology and shows how to use it on a few modeling cases.
We present an ontological analysis of the notion of group agency developed by Christian List and Philip Pettit. We focus on this notion as it allows us to neatly distinguish groups, organizations, corporations – to which we may ascribe agency – from mere aggregates of individuals. We develop a module for group agency within a foundational ontology and we apply it to organizations.
How can organisations survive not only the substitution of members, but also other dramatic changes, like that of the norms regulating their activities, the goals they plan to achieve, or the system of roles that compose them? This paper is a first step towards a well-founded ontological analysis of the persistence of organisations through changes. Our analysis leverages Kit Fine's notions of rigid and variable embodiment and proposes to view the (history of the) decisions made by the members of the organisation as the criterion to re-identify the organisation through change.
Among the possible solutions to the paradoxes of collective preferences, single-peakedness is significant because it has been associated with a suggestive conceptual interpretation: a single-peaked preference profile entails that, although individuals may disagree on which option is the best, they conceptualize the choice along a shared unique dimension, i.e. they agree on the rationale of the collective decision. In this article, we discuss the relationship between the structural property of single-peakedness and its suggested interpretation as uni-dimensionality of a social choice. In particular, we offer a formalization of the relationship between single-peakedness and its conceptual counterpart, we discuss their logical relations, and we question whether single-peakedness provides a rationale for collective choices.
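The structural property itself is easy to state operationally: fix a left-right ordering of the alternatives (the shared dimension), and a preference order is single-peaked iff, reading it from best to worst, every prefix occupies a contiguous interval of that axis. The axis and the sample preferences below are illustrative assumptions; the check is a standard equivalent formulation, not this article's formalization.

```python
def single_peaked(preference, axis):
    """Preference (best first) is single-peaked on `axis` iff every
    prefix of it occupies a contiguous interval of the axis."""
    pos = [axis.index(a) for a in preference]
    for k in range(1, len(pos) + 1):
        prefix = pos[:k]
        if max(prefix) - min(prefix) != k - 1:
            return False  # the top-k alternatives are not contiguous
    return True

axis = ("left", "centre", "right")

ok = single_peaked(("centre", "left", "right"), axis)
# Preferring the two extremes over the middle breaks contiguity:
bad = single_peaked(("left", "right", "centre"), axis)
```

The article's question is precisely whether passing this structural test licenses the richer reading — agreement on the dimension — that deliberative accounts attach to it.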
We show that logic has more to offer to ontologists than standard first order and modal operators. We first describe some operators of linear logic which we believe are particularly suitable for ontological modeling, and suggest how to interpret them within an ontological framework. After showing how they can coexist with those of classical logic, we analyze three notions of artifact from the literature to conclude that these linear operators allow for reducing the ontological commitment needed for their formalization, and even simplify their logical formulation.
We study a fragment of Intuitionistic Linear Logic combined with non-normal modal operators. Focusing on the minimal modal logic, we provide a Gentzen-style sequent calculus as well as a semantics in terms of Kripke resource models. We show that the proof theory is sound and complete with respect to the class of minimal Kripke resource models. We also show that the sequent calculus allows cut elimination. We put the logical framework to use by instantiating it as a logic of agency. In particular, we apply it to reason about the resource-sensitive use of artefacts.
We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target a model-agnostic distillation approach exemplified with these two frameworks, secondly, to study how these two frameworks interact on a theoretical level, and, thirdly, to investigate use-cases in ML and AI in a comparative manner. Specifically, we envision that user-studies will help determine human understandability of explanations generated using these two frameworks.
In knowledge representation, socio-technical systems can be modeled as multiagent systems in which the local knowledge of each individual agent can be seen as a context. In this paper we propose formal ontologies as a means to describe the assumptions driving the construction of contexts as local theories and to enable interoperability among them. In particular, we present two alternative conceptualizations of the notion of sociomateriality (and entanglement), which is central in the recent debates on socio-technical systems in the social sciences, namely critical and agential realism. We thus start by providing a model of entanglement according to the critical realist view, representing it as a property of objects that are essentially dependent on different modules of an already given ontology. We then refine our treatment by proposing a taxonomy of sociomaterial entanglements that distinguishes between ontological and epistemological entanglement. In the final section, we discuss the second perspective, which is more challenging from the point of view of knowledge representation, and we show that the very distinction of information into modules can be at least in principle built out of the assumption of an entangled reality.
We show how to embed a framework for multilateral negotiation, in which a group of agents implement a sequence of deals concerning the exchange of a number of resources, into linear logic. In this model, multisets of goods, allocations of resources, preferences of agents, and deals are all modelled as formulas of linear logic. Whether or not a proposed deal is rational, given the preferences of the agents concerned, reduces to a question of provability, as does the question of whether there exists a sequence of deals leading to an allocation with certain desirable properties, such as maximising social welfare. Thus, linear logic provides a formal basis for modelling convergence properties in distributed resource allocation.
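The rationality condition that the linear logic encoding captures can be illustrated directly in ordinary arithmetic: a deal (a reallocation of goods) is individually rational when every agent is strictly better off after it. The agents, goods, and utility functions below are illustrative assumptions; in the paper this check is recast as provability of a linear logic sequent rather than computed numerically.

```python
def individually_rational(deal, utilities):
    """A deal (before, after) is individually rational iff every agent's
    utility strictly increases under the reallocation."""
    before, after = deal
    return all(
        utilities[agent](after[agent]) > utilities[agent](before[agent])
        for agent in before)

# Assumed additive utilities: ann prefers apples, bob prefers pears.
utilities = {
    "ann": lambda goods: 3 * goods.count("apple") + goods.count("pear"),
    "bob": lambda goods: goods.count("apple") + 3 * goods.count("pear"),
}

before = {"ann": ["pear", "pear"], "bob": ["apple", "apple"]}
after = {"ann": ["apple", "apple"], "bob": ["pear", "pear"]}

# Swapping the bundles makes both agents strictly better off.
rational = individually_rational((before, after), utilities)
```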
In this paper, I discuss the analysis of logic in the pragmatic approach recently proposed by Brandom. I consider different consequence relations, formalized by classical, intuitionistic and linear logic, and I will argue that the formal theory developed by Brandom, even if it provides powerful foundational insights on the relationship between logic and discursive practices, cannot account for important reasoning patterns represented by non-monotonic or resource-sensitive inferences. Then, I will present an incompatibility semantics in the framework of linear logic which allows us to refine Brandom's concept of defeasible inference and to account for those non-monotonic and relevant inferences that are expressible in linear logic. Moreover, I will suggest an interpretation of discursive practices based on an abstract notion of agreement on what counts as a reason which is deeply connected with linear logic semantics.
We show that linear logic can serve as an expressive framework in which to model a rich variety of combinatorial auction mechanisms. Due to its resource-sensitive nature, linear logic can easily represent bids in combinatorial auctions in which goods may be sold in multiple units, and we show how it naturally generalises several bidding languages familiar from the literature. Moreover, the winner determination problem, i.e., the problem of computing an allocation of goods to bidders producing a certain amount of revenue for the auctioneer, can be modelled as the problem of finding a proof for a particular linear logic sequent.
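The winner determination problem itself is easy to state independently of the encoding: pick a set of bids on pairwise disjoint bundles that maximises revenue. The brute-force sketch below illustrates that problem with assumed bids and prices; it is the search that the linear logic formulation replaces with proof search, not the paper's method.

```python
from itertools import combinations

def winner_determination(bids):
    """Exhaustively find the revenue-maximal set of bids on disjoint
    bundles. Each bid is (tuple_of_goods, price)."""
    best, best_revenue = (), 0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            bundles = [set(b) for b, _ in combo]
            # accepted bundles must not share any good
            if sum(len(b) for b in bundles) == len(set().union(*bundles)):
                revenue = sum(price for _, price in combo)
                if revenue > best_revenue:
                    best, best_revenue = combo, revenue
    return best, best_revenue

# Assumed example: selling the items separately beats the package bid.
bids = [(("tv",), 60), (("stand",), 50), (("tv", "stand"), 100)]
winners, revenue = winner_determination(bids)  # revenue 110
```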
We present a preliminary high-level formal theory, grounded on knowledge representation techniques and foundational ontologies, for the uniform and integrated representation of the different kinds of (qualitative and quantitative) knowledge involved in the design process. We discuss the conceptual nature of engineering design by individuating and analyzing the involved notions. These notions are then formally characterized by extending the DOLCE foundational ontology. Our ultimate purpose is twofold: (i) to contribute to foundational issues of design; and (ii) to support the development of advanced modelling systems for (qualitative and quantitative) representation of design knowledge.
The theory of collective agency and intentionality is a flourishing field of research, and our understanding of these phenomena has arguably increased greatly in recent years. Extant theories, however, are still ill-equipped to explain certain aspects of collective intentionality. In this article we draw attention to two such underappreciated aspects: the failure of the intentional states of collectives to supervene on the intentional states of their members, and the role of non-human factors in collective agency and intentionality. We propose a theory of collective intentionality which builds on the 'interpretationist' tradition in metasemantics and the philosophy of mind as initiated by David Lewis and recently developed further by Robbie Williams. The collective-level analogue of interpretationism turns out to look different in some ways from the individual-level theory, but is well-suited to accommodating phenomena such as hybrid collective intentionality. Complemented with Kit Fine's theory of variable embodiment, such a theory also provides a diachronic account of intentional collectives.
We present an algorithm for concept combination inspired and informed by research in cognitive and experimental psychology. From a symbolic AI perspective, dealing with concept combination requires coping with two competing needs: the need for compositionality and the need to account for typicality effects. Building on our previous work on weighted logic, the proposed algorithm can be seen as a step towards the management of both needs. More precisely, following a proposal of Hampton [1], it combines two weighted Description Logic formulas, each defining a concept, using the following general strategy. First, it selects all the features needed for the combination, based on the logical distinction between necessary and impossible features. Second, it determines the threshold and assigns new weights to the features of the combined concept, trying to preserve the relevance and the necessity of the features. We illustrate how the algorithm works through some paradigmatic examples discussed in the cognitive literature.
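The two-step strategy can be caricatured with plain numeric weights in place of weighted Description Logic formulas. This is a drastically simplified sketch, not the paper's algorithm; the weight thresholds and feature names below are invented for illustration.

```python
def combine_concepts(c1, c2, necessary=1.0, impossible=-1.0):
    """Toy Hampton-style combination of two weighted concepts.

    A concept is a dict mapping feature -> weight. Features that are
    impossible (weight <= `impossible`) in either parent are blocked;
    features that are necessary (weight >= `necessary`) in a parent are
    inherited with full weight; the rest get the average of the
    available weights as a compromise on typicality.
    """
    combined = {}
    for f in set(c1) | set(c2):
        ws = [w for w in (c1.get(f), c2.get(f)) if w is not None]
        if any(w <= impossible for w in ws):
            continue  # impossible in one parent: excluded from the combination
        if any(w >= necessary for w in ws):
            combined[f] = max(ws)  # necessity is preserved
        else:
            combined[f] = sum(ws) / len(ws)  # compromise weight
    return combined
```

For instance, combining a "pet" concept `{"lives_in_house": 1.0, "small": 0.5}` with a "fish" concept `{"swims": 1.0, "lives_in_house": -1.0}` blocks the impossible feature, keeps the necessary one, and retains the merely typical one.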
For over a decade now, a community of researchers has contributed to the development of the Unified Foundational Ontology (UFO), aimed at providing foundations for all major conceptual modeling constructs. This ontology has led to the development of an Ontology-Driven Conceptual Modeling language dubbed OntoUML, reflecting the ontological micro-theories comprising UFO. Over the years, UFO and OntoUML have been successfully employed in a number of academic, industrial and governmental settings to create conceptual models in a variety of different domains. These experiences have pointed to opportunities for improvement not only in the language itself but also in its underlying theory. In this paper, we take the first step in that direction by revising the theory of types in UFO in response to empirical evidence. The new version of this theory shows that many of the meta-types present in OntoUML (differentiating Kinds, Roles, Phases, Mixins, etc.) should be considered not as restricted to Substantial types but should instead be applied to model Endurant Types in general, including Relator types, Quality types and Mode types. We also contribute a formal characterization of this fragment of the theory, which is then used to advance a metamodel for OntoUML 2.0. Finally, we propose a computational support tool implementing this updated metamodel.
Ontologies represent principled, formalised descriptions of agents’ conceptualisations of a domain. For a community of agents, these descriptions may differ among agents. We propose an aggregative view of the integration of ontologies based on Judgement Aggregation (JA). Agents may vote on statements of the ontologies, and we aim at constructing a collective, integrated ontology that reflects the individual conceptualisations as much as possible. As several results in JA show, many attractive and widely used aggregation procedures are prone to returning inconsistent collective ontologies. We propose to resolve the possible inconsistencies in the collective ontology by applying suitable weakenings of the axioms that cause them.
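The kind of majority-based inconsistency at issue can be seen already in a tiny propositional setting, in the style of the discursive dilemma. The agents, statements, and helper functions below are invented for illustration; they are not the paper's framework.

```python
from itertools import product

def majority(profiles):
    """Propositionwise majority over the agents' yes/no judgements."""
    n = len(profiles)
    return {s: sum(prof[s] for prof in profiles) * 2 > n
            for s in profiles[0]}

def consistent(js):
    """Check a judgement set on {p, q, p&q->r, r} for propositional
    consistency by searching for a satisfying assignment."""
    for p, q, r in product([True, False], repeat=3):
        if (p == js["p"] and q == js["q"] and r == js["r"]
                and ((not (p and q)) or r) == js["p&q->r"]):
            return True
    return False

# Three agents, each individually consistent, vote on four statements.
agents = [
    {"p": True,  "q": True,  "p&q->r": False, "r": False},
    {"p": True,  "q": False, "p&q->r": True,  "r": False},
    {"p": False, "q": True,  "p&q->r": True,  "r": False},
]
```

Majority accepts `p`, `q`, and `p&q->r` while rejecting `r`: the collective outcome is inconsistent even though every individual judgement set is consistent, which is exactly the situation the weakening-based repair is meant to address.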
Modelling Equivalent Definitions of Concepts. Daniele Porello - 2015 - In Modeling and Using Context - 9th International and Interdisciplinary Conference, {CONTEXT} 2015, Larnaca, Cyprus, November 2-6, 2015, Proceedings. Lecture Notes in Computer Science 9405. pp. 506-512.
We introduce the notions of syntactic synonymy and referential synonymy due to Moschovakis. Those notions are capable of accounting for fine-grained aspects of the meaning of linguistic expressions, by formalizing the Fregean distinction between sense and denotation. We integrate Moschovakis’s theory with the theory of concepts developed in the foundational ontology DOLCE, in order to enable a formal treatment of equivalence between concepts.
In this paper, building on these previous works, we go deeper into the understanding of crowd behavior by proposing an approach that integrates ontological models of crowd behavior with dedicated computer vision algorithms, with the aim of recognizing targeted complex events happening on the playground from the observation of spectator crowd behavior. To do so, we first propose an ontology encoding the available knowledge on spectator crowd behavior, built as a specialization of the DOLCE foundational ontology, which allows the representation of categories belonging both to the physical and to the social realms. We then propose a simplified and tractable version of this ontology in a new temporal extension of a description logic, which is used for temporally coupling events happening on the playground with spectator crowd behavior. Finally, computer vision algorithms provide the input information concerning what is observed on the stands, and ontological reasoning delivers the output necessary to perform complex event recognition.
We propose a logic to reason about data collected by a number of measurement systems. The semantics of this logic is grounded in the epistemic theory of measurement, which gives a central role to measurement devices and calibration. In this perspective, the lack of evidence (in the available data) for the truth or falsehood of a proposition requires the introduction of a third truth-value (the undetermined). Moreover, the data collected by a given source are here represented by means of a possible world, which provides a contextual view on the objects in the domain. We approach (possibly) conflicting data coming from different sources in a social choice theoretic fashion: we investigate viable operators to aggregate data and we represent them in our logic by means of suitable (minimal) modal operators.
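The three-valued setting and the aggregation of conflicting sources can be sketched concretely. The connectives below follow the standard strong Kleene scheme, but the aggregator is just an illustrative majority rule, not one of the paper's specific (minimal) modal operators.

```python
# Three truth values: true, undetermined, false.
T, U, F = 1, 0, -1

def k_and(a, b):
    """Strong Kleene conjunction: min over the order F < U < T."""
    return min(a, b)

def k_not(a):
    """Strong Kleene negation: swaps T and F, fixes U."""
    return -a

def aggregate(values):
    """Illustrative majority aggregation over sources: a proposition is
    true (false) if strictly more sources report it true (false); ties
    and missing evidence yield the undetermined value."""
    pos = sum(1 for v in values if v == T)
    neg = sum(1 for v in values if v == F)
    if pos > neg:
        return T
    if neg > pos:
        return F
    return U
```

So two sources asserting a proposition against one denying it yield `T`, while a perfectly balanced conflict, like one assertion, one denial, and one undetermined report, collapses to `U`.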
It is widely recognized that accurately identifying and classifying competitors is a challenge for many companies and entrepreneurs. Nonetheless, it is a paramount activity which provides valuable insights that affect a wide range of strategic decisions. One of the main challenges in competitor identification lies in the complex nature of the competitive relationships that arise in business environments. These have been extensively investigated over the years, which has led to a plethora of competition theories and frameworks. Still, the concept of competition remains conceptually complex, as none of these approaches has properly formalized its assumptions. In this paper, we address this issue by means of an ontological analysis of the notion of competition in general, and of business competition in particular, leveraging theories from various fields, including Marketing, Strategic Management, Ecology, Psychology and Cognitive Sciences. Our analysis, the first of its kind in the literature, is grounded in the Unified Foundational Ontology (UFO) and allows us to formally characterize why competition arises, as well as to distinguish between three types of business competitive relationships, namely market-level, firm-level and potential competition.
Product structures are represented in engineering models by depicting and linking components, features and assemblies. Their understanding requires knowledge of both design and manufacturing practices, and yet further contextual reasoning is needed to read them correctly. Since these representations are essential to engineering activities, the lack of a clear and explicit semantics for these models hampers the use of information systems for their assessment and exploitation. We study this problem by identifying different interpretations of structure representations, and then discuss the formal properties that a suitable language needs for representing components, features and combinations of these. We show that the representation of components and features requires a non-standard mereology.
Forests, cars and orchestras are very different ontological entities, and yet very similar in some respects. The relationships they have with the elements they are composed of are often assumed to be reducible to standard ontological relations, like parthood and constitution, but how this could be done is still debated. This paper sheds light on the issue, starting from a linguistic and philosophical analysis aimed at understanding notions like plurality, collective and composite, and proposing a formal approach to characterise them. We conclude the presentation with a discussion and analysis of social groups within this framework.
Towards a Cognitive Semantics of Type. Daniele Porello & Giancarlo Guizzardi - 2017 - In AI*IA 2017 Advances in Artificial Intelligence - XVIth International Conference of the Italian Association for Artificial Intelligence, Bari, Italy, November 14-17, 2017, Proceedings. Lecture Notes in Computer Science 10640. pp. 428-440.
Types are a crucial concept in conceptual modelling, logic, and knowledge representation, as they are a ubiquitous device to understand and formalise the classification of objects. We propose a logical treatment of types based on a cognitively inspired modelling that accounts for the amount of information that is actually available to a certain agent in the task of classification. We develop a predicative modal logic whose semantics is based on conceptual spaces that model the actual information that a cognitive agent has about objects, types, and the classification of an object under a certain type. In particular, we account for possible failures in the classification, for the lack of sufficient information, and for some aspects related to vagueness.
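The intended behavior of such a classifier can be approximated by a toy distance-to-prototype test over quality dimensions. This is only an informal sketch of the conceptual-space intuition, not the paper's modal semantics; the dimensions, thresholds, and the three-way verdict labels are invented.

```python
import math

def classify(observed, prototype, accept, reject):
    """Classify an object under a type given partial observations.

    observed:  dict of quality dimension -> measured value (possibly partial)
    prototype: dict of quality dimension -> the type's prototypical value
    accept:    distance below which classification succeeds
    reject:    distance above which classification fails

    Returns "yes", "no", or "undetermined" -- the latter both when the
    agent lacks information on every relevant dimension and when the
    object falls in the vague region between the two thresholds.
    """
    shared = [d for d in prototype if d in observed]
    if not shared:
        return "undetermined"  # no relevant information available
    dist = math.sqrt(sum((observed[d] - prototype[d]) ** 2 for d in shared))
    if dist <= accept:
        return "yes"
    if dist >= reject:
        return "no"
    return "undetermined"  # borderline case: vagueness
```

An object close to the prototype on the observed dimensions is classified under the type, a distant one is not, and both missing information and borderline distances yield the undetermined verdict.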
In this paper we give some formal examples of ideas developed by Penco in two papers on the tension inside Frege's notion of sense (see Penco 2003). The paper attempts to reconcile the tension between the semantic and cognitive aspects of sense through the idea of sense as proof or procedure – not as an alternative to the idea of sense as truth condition, but as complementary to it (as sometimes happens in the old tradition of procedural semantics).
We present a number of modal logics to reason about group norms. As a preliminary step, we discuss the ontological status of the groups to which the norms are applied, by adapting Christian List's classification of collective attitudes into aggregated, common, and corporate attitudes. Accordingly, we introduce modalities to capture aggregated, common, and corporate group norms. We then investigate the principles for reasoning about those types of modalities. Finally, we discuss the relationship between group norms and types of collective responsibility.
A thorough understanding of what needs are is fundamental for designing well-behaved information systems for many social applications, and in particular for public services. Talk of needs indeed pervades the jargon of Public Administrations when motivating their service offerings. In this paper, we propose an ontological analysis of needs, aiming at a principled disentangling of the different uses of the term. We leverage the philosophical tradition on intentionality, for its rich understanding of mental entities, compare it with the well-established BDI (Belief-Desire-Intention) tradition in knowledge representation, and propose a formalisation of needs within the foundational ontology DOLCE. Throughout the paper, we motivate our analysis by focusing on needs in public services.
Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need for automated methods for repairing inconsistencies while preserving as much of the original knowledge as possible increases. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce the theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on the framework, and provide an analysis of the computational complexity. Through an empirical analysis over real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms.
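The contrast between removing axioms and weakening them can be made concrete with a numeric caricature: "axioms" are bounds on a single integer variable, and weakening relaxes a bound step by step instead of deleting it. This is not Description Logic and not the paper's refinement operators; it only illustrates why weakening preserves more knowledge than removal.

```python
def satisfiable(constraints):
    """Constraints are ('<=', b) or ('>=', b) on one integer variable;
    the set is satisfiable iff the implied lower bound does not exceed
    the implied upper bound."""
    lo = max((b for op, b in constraints if op == '>='), default=float('-inf'))
    hi = min((b for op, b in constraints if op == '<='), default=float('inf'))
    return lo <= hi

def repair_by_removal(constraints):
    """Classical repair: drop constraints (last first) until satisfiable."""
    cs = list(constraints)
    while not satisfiable(cs):
        cs.pop()  # the dropped constraint's information is lost entirely
    return cs

def repair_by_weakening(constraints, step=1):
    """Weakening-style repair: relax the last constraint's bound one
    step at a time until the set is satisfiable, so a weakened version
    of every constraint survives."""
    cs = list(constraints)
    while not satisfiable(cs):
        op, b = cs[-1]
        cs[-1] = (op, b + step) if op == '<=' else (op, b - step)
    return cs
```

On the inconsistent set `[('>=', 5), ('<=', 3)]`, removal discards the upper bound entirely, while weakening keeps a (relaxed) upper bound `('<=', 5)`, retaining strictly more of the original information.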
Axiom weakening is a technique that allows for a fine-grained repair of inconsistent ontologies. Its main advantage is that it repairs ontologies by making axioms less restrictive rather than by deleting them, employing refinement operators. In this paper, we build on previously introduced axiom weakening for ALC, and make it much more irresistible by extending its definitions to deal with SROIQ, the expressive and decidable description logic underlying OWL 2 DL. We extend the definitions of the refinement operator to deal with SROIQ constructs, in particular with role hierarchies, cardinality constraints and nominals, and illustrate its application. Finally, we discuss the problem of termination of an iterated weakening procedure.