Foundational ontologies, central constructs in ontological investigations and engineering alike, are based on ontological categories. First proposed by Aristotle as the very ur-elements from which the whole of reality can be derived, these categories are not easy to identify, let alone partition and/or hierarchize; in particular, the question of their number poses serious challenges. The late medieval philosopher Dietrich of Freiberg wrote around 1286 a tutorial that can help us today with this exceedingly difficult task. In this paper, I discuss ontological categories and their importance for foundational ontologies from both the contemporary perspective and the original Aristotelian viewpoint, I provide a translation from the Latin into English of Dietrich's De origine II with an introductory elaboration, and I extract from this text a foundational ontology (in fact a single-category one) rooted in Dietrich's specification of types of subjecthood and his conception of intentionality as causal operation.
The van Wijngaarden grammars are two-level grammars that exhibit many interesting properties. In the present article I elaborate on six of these properties, to wit: (i) their being constituted by two grammars; (ii) their ability to generate (possibly infinitely many) strict languages, as well as their own metalanguage; (iii) their context-sensitivity; (iv) their high descriptive power; (v) their productivity, i.e. their ability to generate an infinite number of production rules; and (vi) their equivalence with the unrestricted, or Type-0, Chomsky grammars.
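As a minimal illustrative sketch (not from the article itself, and not in the ALGOL 68 notation), the two levels and the productivity property (v) can be simulated as follows: a metarule generates infinitely many values for a metanotion N, and each consistent (uniform) substitution of such a value into a single hyperrule yields one strict production rule. The names `metanotion_N`, `HYPERRULE`, and `strict_rules` are hypothetical, chosen for this sketch only.

```python
# Sketch of a van Wijngaarden two-level grammar: metarules (level 1)
# generate the values of metanotions; each consistent substitution into
# a hyperrule (level 2) yields one strict production rule.

from itertools import islice

# Level 1: the metanotion N derives the words n, nn, nnn, ...
def metanotion_N():
    s = "n"
    while True:
        yield s
        s += "n"

# Level 2: one hyperrule in which the metanotion N occurs three times.
# Uniform replacement (every occurrence of N gets the SAME value) is what
# lets such grammars capture non-context-free patterns like a^k b^k c^k.
HYPERRULE = ("S", ["N a-part", "N b-part", "N c-part"])

def strict_rules(limit):
    """Instantiate the hyperrule for the first `limit` values of N."""
    rules = []
    for value in islice(metanotion_N(), limit):
        lhs, rhs = HYPERRULE
        rules.append((lhs, [sym.replace("N", value) for sym in rhs]))
    return rules

for lhs, rhs in strict_rules(3):
    print(lhs, "->", ", ".join(rhs))
```

Since the metarule for N never runs out of values, the single hyperrule stands for infinitely many strict production rules, which is exactly the productivity property.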
Ontologies are among the most central constructs in today's large plethora of knowledge technologies, namely in the context of the Semantic Web. As their name indicates, they are direct heirs to the ontological investigations of the long Western philosophical tradition, but it is not easy to build bridges between the two. Contemporary ontological commitments often take causality as a central aspect for the ur-segregation of entities, especially in scientific upper ontologies; theories of causality and philosophical ontological investigations often go hand-in-hand, and were essentially inseparable in medieval thought. This constitutes the foundation for a bridge, and this article analyzes the causality-based ontology of the late medieval philosopher Dietrich of Freiberg from the viewpoint of today's upper-ontology engineering. In this bridging attempt, it offers a translation into English of the first part of Dietrich's De origine (abbreviated title) that is a compromise between traditional scholarly translations of medieval Latin philosophical texts and contemporary ontology.
For millennia, knowledge has eluded a precise definition. The industrialization of knowledge (IoK) and the associated proliferation of so-called knowledge communities in the last few decades have caused this state of affairs to deteriorate, namely by creating a trio composed of data, information, and knowledge (DIK) that is not unlike the aporia of the trinity in philosophy. This calls for a general theory of knowledge (ToK) that can work as a foundation for a science of knowledge (SoK) and that additionally distinguishes knowledge from both data and information. In this paper, I attempt to sketch this generality by establishing both knowledge structures and knowledge systems that can then be adopted/adapted by the diverse communities for their respective knowledge technologies and practices. This is achieved by means of a formal (indeed mathematical) approach to epistemological matters, a.k.a. formal epistemology. The corresponding application focus is on knowledge systems implementable as computer programs.
Genera, typically hand-in-hand with their branching species, are essential elements of vocabulary-based information constructs, in particular scientific taxonomies. Should they also feature in formal ontologies, the highest of such constructs? I argue in this article that the answer is “Yes” and that the question posed in its title also has a Yes-answer: The way medieval ontologists sliced up the world into genera does matter to formal ontology. More specifically, the way Dietrich of Freiberg, a Latin scholastic, conceived and applied strictly generic criteria to slice up the world into its entities can provide some guidelines to the field of formal ontology with respect to not only its contents but also its scope. In particular, Dietrich's information criterion plays a central role here.
Though not focusing on the history of classical logic, this book discusses and quotes central passages on its origins and development, namely from a philosophical perspective. Though not a book in mathematical logic, it approaches formal logic from an essentially mathematical perspective. Biased towards a computational approach, with SAT and VAL as its backbone, this is an introduction to logic that covers essential aspects of its three branches, to wit: philosophical, mathematical, and computational.
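To illustrate the SAT/VAL backbone mentioned above (a sketch of my own, not taken from the book): satisfiability (SAT) asks whether a propositional formula is true under at least one assignment, validity (VAL) whether it is true under all of them, and the two are dual, since a formula is valid exactly when its negation is unsatisfiable. The brute-force checkers below represent formulas as Python functions over assignments.

```python
# Brute-force SAT and VAL checks for propositional formulas, with a
# formula represented as a function from a truth assignment (a dict
# mapping atom names to booleans) to a boolean.

from itertools import product

def assignments(atoms):
    # All 2^n truth assignments over the given atoms.
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def sat(formula, atoms):
    """SAT: true under at least one assignment?"""
    return any(formula(v) for v in assignments(atoms))

def val(formula, atoms):
    """VAL: true under every assignment? (Duality: VAL(f) iff not SAT(not f).)"""
    return all(formula(v) for v in assignments(atoms))

# Example: p -> p is valid; p and not-p is unsatisfiable.
print(val(lambda v: (not v["p"]) or v["p"], ["p"]))   # True
print(sat(lambda v: v["p"] and not v["p"], ["p"]))    # False
```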
A. Newell and H. A. Simon were two of the most influential scientists in the emerging field of artificial intelligence (AI) from the late 1950s through to the early 1990s. This paper reviews their crucial contribution to this field, namely to symbolic AI. This contribution consisted mostly in their quest for the implementation of general intelligence and (commonsense) knowledge in artificial thinking or reasoning artifacts, a project they shared with many other scientists but that in their case was theoretically based on the idiosyncratic notions of symbol systems and the representational abilities these give rise to, in particular with respect to knowledge. While focusing on the period 1956–1982, this review cites both earlier and later literature, and it attempts to make visible the potential relevance of these contributions to today's greatest unifying AI challenge, to wit, the design of wholly autonomous artificial agents (a.k.a. robots) that are not only rational and ethical, but also self-conscious.
Expressiveness and decidability are two core aspects of programming languages that should be thoroughly known by those who use them; this includes knowledge of their metalanguages, a.k.a. formal grammars. The van Wijngaarden grammars (WGs) are capable of generating all the languages in the Chomsky hierarchy and beyond; this makes them a relevant tool in the design of (more) expressive programming languages. But this expressiveness comes at a very high cost: The syntax of WGs is extremely complex, and the decision problem for the generated languages is in general unsolvable. With this in mind, I provide here a short primer on the syntax of WGs, which includes syntactic restrictions that guarantee decidability for the corresponding generated languages.
The concept of unconscious knowledge is fundamental for an understanding of human thought processes and mentation in general; however, the psychological community at large is not familiar with it. This paper offers a survey of the main psychological research currently being carried out into cognitive processes and examines pathways that can be integrated into a discipline of unconscious knowledge. It shows that the field already has a defined history and discusses some of the features that all kinds of unconscious knowledge seem to share at a deeper level. With the aim of promoting further research, we discuss the main challenges that the postulation of unconscious cognition faces within the psychological community.
Contemporary studies in unconscious cognition are essentially founded on dissociation, i.e., on how unconscious cognition dissociates from conscious mental processes and representations. This is claimed to happen in so many and such diverse ways that one is often lost in dissociation. In order to reduce this state of confusion, we carry out two major tasks here: based on the central distinction between cognitive processes and representations, we identify and isolate the main dissociation paradigms; we then critically analyze their key tenets and reported findings.
2nd edition. Many-valued logics are those logics that have more than the two classical truth values, to wit, true and false; in fact, they can have from three to infinitely many truth values. This property, together with truth-functionality, provides a powerful formalism to reason in settings where classical logic—as well as other non-classical logics—is of no avail. Indeed, originally motivated by philosophical concerns, these logics soon proved relevant for a plethora of applications ranging from switching theory to cognitive modeling, and they are today in more demand than ever, due to the realization that inconsistency and vagueness in knowledge bases and information processes are not only inevitable and acceptable, but also perhaps welcome. The main modern applications of (any) logic are to be found in the digital computer, and we thus require practical knowledge of how to computerize—which also means automate—decisions (i.e. reasoning) in many-valued logics. This, in turn, necessitates a mathematical foundation for these logics. This book provides both this mathematical foundation and this practical knowledge in a rigorous, yet accessible, text, while at the same time situating these logics in the context of the satisfiability problem (SAT) and automated deduction. The main text is complemented with a large selection of exercises, a plus for the reader wishing not only to learn about, but also to do something with, many-valued logics.
Reasoning over our knowledge bases and theories often requires non-deductive inferences, especially – but by no means only – when commonsense reasoning is at stake, i.e. when practical agency is called for. This kind of reasoning can be adequately formalized via the notion of supraclassical consequence, a non-deductive consequence relation tightly associated with default and non-monotonic reasoning and featuring centrally in abductive, inductive, and probabilistic logical systems. In this paper, we analyze core concepts and problems of these systems in the light of supraclassical consequence.
The representational nature of human cognition and thought in general has been a source of controversies. This is particularly so in the context of studies of unconscious cognition, in which representations tend to be ontologically and structurally segregated with regard to their conscious status. However, it appears evolutionarily and developmentally unwarranted to posit such segregations, as, otherwise, artifact structures and ontologies must be concocted to explain them from the viewpoint of the human cognitive architecture. Here, from a by-and-large Classical cognitivist viewpoint, I show why this segregation is wrong, and elaborate on the need to postulate an ontological and structural continuity between unconscious and conscious representations. Specifically, I hypothesize that this continuity is to be found in the symbol-based interplay between the syntax and the semantics of thought, and I propose a model of human information processing characterized by the integration of syntactic and semantic representations.
Since Freud and his co-author Breuer spoke of dissociation in 1895, a scientific paradigm has been painstakingly established in the field of unconscious cognition: the dissociation paradigm. However, recent critical analysis of the many and various reported dissociations reveals their blurred, or unveridical, character. Moreover, we remain ignorant with respect to the ways cognitive phenomena transition from consciousness to an unconscious mode. This hinders us from filling in the puzzle of the unified mind. We conclude that we have reached a Kuhnian crisis in the field of unconscious cognition, and we predict that new models, incorporating in part the relevant findings of the dissociation paradigm—but also of dynamic psychology—will soon be established. We further predict that some of these models will be largely based on the pairs representation–process and analog–digital.
The traditional model of human cognition (TMHC) postulates an ontological and/or structural gap between conscious and unconscious mental representations. By and large, it sees higher-level mental processes as commonly conceptual or symbolic in nature and therefore conscious, whereas unconscious, lower-level representations are conceived as non-conceptual or sub-symbolic. However, experimental evidence belies this model, suggesting that higher-level mental processes can be, and often are, carried out in a wholly unconscious way and/or without conceptual representations, and that conceptual representations can themselves be processed unconsciously. This entails that the TMHC, as well as the theories of mental representation it motivates and that in turn support it, is wrong.
Shared conceptualization, in the sense we take it here, is as recent a notion as the Semantic Web, but its relevance for a large variety of fields requires efficient methods of extraction and representation for both quantitative and qualitative data. This notion is particularly relevant for the investigation into, and construction of, semantic structures such as knowledge bases and taxonomies, but given the large, often inaccurate, corpora available for search, we can get only approximations. We see fuzzy description logic as an adequate medium for the representation of human semantic knowledge and propose a means to couple it with fuzzy semantic networks via the propositional Łukasiewicz fuzzy logic, such that these suffice for the decidability of queries over a semantic-knowledge base such as “to what degree of sharedness does it entail the instantiation C(a) for some concept C?” or “what are the roles R that connect the individuals a and b to degree of sharedness ε?”
All research is immersed in the competition for knowledge, but this competition is not always governed by fairness. In this opinion article, I elaborate on indicators of unfairness to be found in both evaluation guides and evaluation panels, and I spontaneously offer a number of rules of thumb meant to keep unfairness at bay. Although they are explicitly offered to the Portuguese Foundation for Science and Technology (FCT), and in particular to the evaluation panel for Philosophy, Ethics and Religion of the fifth edition of FCT's Individual Call to Scientific Employment Stimulus (here used as concrete illustrations of unfairness), my rules of thumb are guaranteed to promote fairness in the competition for knowledge in general.
Although formal thought disorder (FTD) has long been a clinical label in the assessment of some psychiatric disorders, in particular of schizophrenia, it remains a source of controversy, mostly because it is hard to say what exactly the “formal” in FTD refers to. We see anomalous processing of terminological knowledge, a core construct of human knowledge in general, behind FTD symptoms, and we approach this anomaly from a strictly formal perspective. More specifically, we present here a symbolic computational model of storage in, and activation of, a human semantic network, or semantic memory, whose core element is logical form; this is normalized by description logic (DL), namely by CL (Conception Language), a DL-based language designed to formalize conceptualization from the viewpoint of individual cognitive agency. In this model, disruptions in the rule-based implementation of the logical form account for the apparently semantic anomalies symptomatic of FTD, which are detected by means of a CL-based algorithmic assessment.
The relations between ontology and information are many and fundamental, and they help us to understand the present gulf between (formal) ontology and (philosophical) Ontology: We can speak, respectively, of ontology-driven information and information-driven ontology as the focus on being informed vs. informed being. The question of whether these two (can) coincide is relevant to both fields, and in this article I elaborate on what needs to be addressed first of all to provide us with an answer: the form. This core ontological concept, rooted in Aristotelian metaphysics, was central to philosophical ontology, in particular in Latin Scholasticism, when it was clearly put into relation with information as that which defines an entity. In this context, Dietrich of Freiberg synthesized this long debate in a way that matters not only to the philosophical effort of producing information-driven ontologies but also to the engineering constructs of ontology-driven information systems.
Formal thought disorder (FTD) is a clinical mental condition that is typically diagnosable by the speech productions of patients. However, this has been a vexing condition for the clinical community, as it is not at all easy to determine what “formal” means in the plethora of symptoms exhibited. We present a logic-based model for the syntax–semantics interface in semantic networking that can not only explain, but also diagnose, FTD. Our model is based on description logic (DL), which is well known for its adequacy in modeling terminological knowledge. More specifically, we show how faulty logical form as defined in the DL-based Conception Language (CL) impacts the semantic content of linguistic productions that are characteristic of FTD. We accordingly call this the dyssyntax model.
In Who's Afraid of Idealism? the philosophical concept of idealism, i.e. the extent to which reality is mind-made, is examined in a new light. Author Luis M. Augusto explores epistemological idealism, the source of all other kinds of idealism, from the viewpoints of Immanuel Kant and Friedrich Nietzsche, two philosophers who spent a large part of their lives denigrating the very concept. Working from Kant's and Nietzsche's views that idealism was a scandal to philosophy and the cause of nihilism, Augusto evaluates these philosophers and their role in shaping epistemological idealism. Using textual evidence from their writings and their reactions to Western philosophers such as Plato, Descartes, and Hegel, Who's Afraid of Idealism? argues that Kant and Nietzsche were in fact idealists at heart. In accessible prose, this text puts forward a theory that goes against current scholarly opinion, and even against Kant's and Nietzsche's opinions of themselves.
The definition of knowledge as justified true belief is the best we presently have. However, the canonical tripartite analysis of knowledge does not do it justice, due to a Platonic conception of a priori truth that puts the cart before the horse. Within a pragmatic approach, I argue that by doing away with a priori truth, namely by submitting truth to justification, and by accordingly altering the canonical analysis of knowledge, this is a fruitful definition: so fruitful, indeed, that it renders the Gettier counterexamples vacuous, allowing positive work in epistemology and related disciplines.
Given the evidence available today, we know that the later Middle Ages knew strong forms of idealism. However, Plato alone will not do to explain some of its features. Aristotle was the most important philosophical authority in the thirteenth and fourteenth centuries, but until now no one has dared to explore in his thought the roots of this idealism, because of the dogma of realism surrounding him. I challenge this dogma, showing that the Stagirite's thought contained the roots of idealist aspects that would be developed, namely by Dietrich of Freiberg and Eckhart of Hochheim, into a fully idealist epistemology.
In this editorial, I explain how Paul Feyerabend's Principle of Proliferation is adopted and adapted as a publication model for the Journal of Knowledge Structures and Systems (JKSS). Critical views on the limitations of both non-dynamic publishing models and government- and industry-based models of research are expressed.
Translation from the Latin into Portuguese, with extensive introduction and notes, of Dietrich of Freiberg's De origine rerum praedicamentalium, Chapters 1 and 2. This text, a late medieval treatise on reality and human cognition (or human cognition and reality), is a particularly hard nut to crack; hence my having translated it (O.K., I also enjoyed the Latin part).
More often than not, theories of belief and of belief ascription restrict themselves to conscious beliefs, thus obliterating a vast part of our mental life and offering extremely incomplete, unrealistic theories. Indeed, conscious beliefs are the exception, not the rule, as far as human doxastic states are concerned, and a naturalistic, realistic theory of knowledge that aspires to completeness has to take unconscious beliefs into consideration. This paper elaborates such a theory of belief.
Translation from the Latin into Portuguese, with extensive introduction and notes, of Dietrich of Freiberg's De origine rerum praedicamentalium, Chapter 5. This text, a late medieval treatise on reality and human cognition (or human cognition and reality), is a particularly hard nut to crack; hence my having translated it (O.K., I also enjoyed the Latin part).
Translation from the Latin into Portuguese, with extensive introduction and notes, of Dietrich of Freiberg's De origine rerum praedicamentalium, Chapters 3 and 4. This text, a late medieval treatise on reality and human cognition (or human cognition and reality), is a particularly hard nut to crack; hence my having translated it (O.K., I also enjoyed the Latin part).
This is the 3rd edition. Although a number of new technological applications require classical deductive computation with non-classical logics, many key technologies still do well—or exclusively so, for that matter—with classical logic. In this first volume, we elaborate on classical deductive computing with classical logic. The objective of the main text is to provide the reader with a thorough elaboration on both classical computing – a.k.a. formal languages and automata theory – and classical deduction with the classical first-order predicate calculus, with a view to computational implementations, namely in automated theorem proving and logic programming. The present third edition improves on the previous ones by providing an altogether more algorithmic approach: There is now a wholly new section on algorithms, and there are in total fourteen clearly isolated algorithms presented in pseudo-code. Other improvements are, for instance, an emphasis on functions in Chapter 1 and more exercises on Turing machines.
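As a minimal taste of the formal-languages-and-automata side (an illustrative sketch of my own, not an excerpt from the book), here is a deterministic finite automaton (DFA) recognizing the binary strings that contain an even number of 1s; the state names and the function name `dfa_accepts` are chosen for this sketch only.

```python
# A DFA with two states tracking the parity of 1s seen so far.
# The start state "even" is also the single accepting state.

def dfa_accepts(s):
    state = "even"
    delta = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    for ch in s:
        state = delta[(state, ch)]   # one transition per input symbol
    return state == "even"

print(dfa_accepts("1001"))  # True  (two 1s)
print(dfa_accepts("10"))    # False (one 1)
```

The transition table delta is the whole of the machine's "program": recognition takes one table lookup per symbol, which is what makes the regular languages the cheapest level of the Chomsky hierarchy to decide.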
Introduction - From the Iliad to the Studies on Hysteria: A chronology of the discovery of the unconscious mind - Freud's theories of the unconscious mind - Jung's collective unconscious - Lacan's linguistic paradigm.
Just started a new book. The aim is to establish a science of knowledge in the same way that we have a science of physics or a science of materials. This might appear an overly ambitious, possibly arrogant, objective, but bear with me. On the day I begin to write it (June 7th, 2020), I think I am in possession of a few things that will help me to achieve this objective. Again, bear with me. My aim is well reflected in the title I chose (just now) for this book: Knowledge & Logic: Towards a science of knowledge. Its most important feature is that I shall take logic to be to knowledge science what calculus is to physics or to materials science. I do not intend to reclaim knowledge from the bosom of philosophy, in which, under the name of epistemology, its erudite discussion has hardly progressed since Plato first defined it as true belief with logos. With only a few adjustments, it will actually provide me with the right, science-bound start. More recently, knowledge has been reclaimed by the field of BA, a reclamation that has opened Pandora's box: among the evils, perhaps at the head of the list, is an overly lay, essentially naive, notion of knowledge. But the very idea that one can have something like “knowledge (management) software” puts us on the right track.
Logic has been a—disputed—ingredient in the emergence and development of the now very large field known as knowledge representation and reasoning. In this book (in progress), I select some central topics in this highly fruitful, albeit controversial, association (e.g., non-monotonic reasoning, implicit belief, logical omniscience, closed world assumption), identifying their sources and analyzing/explaining their elaboration in highly influential published work.
This is a mathematical and computational introduction to many-valued logics. The approach is mostly mathematical, namely algebraic (via the notion of logical matrix) and computational (via the satisfiability problem). An automated calculus, the signed resolution calculus for many-valued logics, is also elaborated on.
In this Ph.D. dissertation, completed at the Sorbonne, it is shown that the whole of medieval philosophy was not reduced to a realist stance: in the 13th–14th centuries, an idealist stance emerged and was developed into a full-fledged epistemological idealism, personified in the philosophers Eckhart von Hochheim and Dietrich von Freiberg. This dissertation deviates from most works in the history of philosophy by proposing to see this history as a taxonomy.
This paper defends the view that a correct analysis of knowledge must take empirical data into consideration. The data provided here are from experimental psychology, namely from phenomena involving unconscious cognition.
Eckhart’s doctrine of the Bilder is highly original not so much for containing new elements as for the conciliation it achieves among sources at first sight incompatible; these sources can be reduced to three main ones: Plato, Aristotle, and Christian thought. In this paper, I show that Eckhart’s doctrine of the Bilder is simultaneously (a) an Aristotelian epistemic recreation of Plato’s doctrine of ideas, and (b) a Christian ontological recreation of Aristotle’s doctrine of cognition. As such, it is a technical manipulation of these sources, rather than a mystical doctrine.