Piotr Cyciura, Human Freedom and Metaphysics. Human freedom is in a sense the topic of metaphysics. Freedom is usually regarded as what is most perfect in reality. For man, being perfect is the same as acting in the way that is proper to him alone, viz., acting in a rational manner. However, what man desires is more perfect than what he is. Human existence is participated, whereas what man aims at in the most rational manner is Existence itself. Therefore God (as loved and known) guarantees human freedom. A value is anything that brings us closer to Him (anything save God and persons), provided we offer it to Him. It is our choice whether or not we make whatever is ours obey God. However, what is eventually achieved is not what is ours but we ourselves, living in Life itself. There are no limits to human freedom in the physical world; whatever bears upon us, i.e. what is natural, can become an offering, whereas what is offered to Him in such a way that He cannot reject it is the only absolute value. Keywords: freedom, values, metaphysics, God.
Following on the arguments adumbrated in his previous works, Piotr Hoffman here argues that the notion of and concern with violence are not limited to political philosophy but in fact form an essential component of philosophy in general. The acute awareness of the ever-present possibility of violence, Hoffman claims, filters into and informs ontology and epistemology in ways that require careful analysis. In his previous book, Doubt, Time, Violence, Hoffman explored the theme of violence in relation to Descartes' problematic of doubt and Heidegger's work on temporality. The pivotal notion deriving from that investigation is the notion of the other as the ultimate limit of one's powers. In effect, Hoffman argues, our practical mastery of the natural environment still leaves intact the limitation of human agents by each other. In a violent environment, the other emerges as an insurmountable obstacle to one's aims and purposes, or as an inescapable danger which one is powerless to hold at bay. The other is thus the focus of an ultimate resistance to one's powers. The special status of the other, as Hoffman articulates it, is at the root of several key notions around which modern philosophy has built its problematic. Arguing that when the theme of violence is taken into account many conceptual tensions and puzzles receive satisfying solutions, Hoffman traces the theme through the issue of things versus properties; through Kant's treatment of causality, necessity, and freedom in the Critique of Pure Reason; and through the early parts of Hegel's Logic. The result is a complete reorientation and reinterpretation of these important texts. Violence in Modern Philosophy offers patient and careful textual clarification in light of Hoffman's central thesis regarding the other as ultimate limit. With a high level of originality, he shows that the theme of violence is the hidden impulse behind much of modern philosophy.
Hoffman's unique stress on the constitutive importance of violence also offers a challenge to the dominant "compatibilist" tradition in moral and political theory. Of great interest to all philosophers, this work will also provide fresh insights to anthropologists and all those in the social sciences and humanities who occupy themselves with the general theory of culture.
In the paper I explore the relations between a relatively new and quickly expanding branch of artificial intelligence – automated discovery systems – and some new views advanced in the old debate over scientific realism. I focus my attention on one such system, GELL-MANN, designed in 1990 at Wichita State University. The program's task was to analyze elementary particle data available in 1964 and formulate a hypothesis (or hypotheses) about a 'hidden', simpler structure of matter – or, to put it in contemporary terms, to discover quarks. The central thesis of my paper is that systems like GELL-MANN not only discover (or rediscover) the hidden structure of matter, but also provide strong independent evidence in favor of scientific realism about the entities involved in that structure. I attempt to show how an argument for scientific realism about sub-microscopic entities can be constructed that would parallel Ian Hacking's 'argument from coincidence', presented with respect to microscopic objects in his famous book Representing and Intervening.
Was Heidegger a 'realist' or an 'idealist'? The issue has been and continues to be hotly debated in Heidegger scholarship. Here it is argued that the much more desirable realistic interpretation of Heidegger can be sustained, provided his theory of moods is given its due. Moods, I argue, are not only 'equiprimordial' with Dasein's understanding of being, but are also irreducible to the latter. It is often held - correctly, as it seems to the author - that Heidegger's idealism is all but inevitable if Dasein's awareness of entities is grounded only in Dasein's understanding of being. But in Being and Time Heidegger speaks also of how what there is is 'disclosed moodwise'. The essay closely analyzes this specifically moody mode of disclosure, and shows both its autonomy vis-à-vis the understanding of being and its function of securing, for Dasein, access to a truly independent reality.
In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and BIO-AI (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers's hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is an important claim because, if true, it would demystify the privileged-access problem of first-person consciousness and cast it as an empirical problem of science rather than a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument that precludes the implementation of H-consciousness in an organic or inorganic machine, provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process. That is, in the same way that we do not preclude a machine from implementing photosynthesis, we also do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, and no longer a philosophical question. I propose that Boltuc's conclusion, while plausible and convincing, comes at a very high price: the argument given for his conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine.
The purpose of this paper is to comment on the argument for his conclusion and to offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.
The truly philosophical issue in machine consciousness is whether machines can have 'hard consciousness' (as in Chalmers's hard problem of consciousness). The criteria for hard consciousness are higher than those for phenomenal consciousness, since the latter incorporates first-person functional consciousness.
English title: Master Eckhart's God Confronted with the Nietzschean Critique of Christianity. The author tries to demonstrate that the way of thinking about the Christian God developed in the late Middle Ages by Master Eckhart goes beyond the interpretation which underlies Nietzsche's criticism of Christianity as a religion of the other world. In the paper, the author first presents the said criticism, followed by the vision of God outlined by Eckhart. He demonstrates that the Christianity criticized by Nietzsche uses a commonsense vision of God's transcendence based on spatial images. The author also demonstrates that Eckhart defines this transcendence in such a way that it does not fall under Nietzsche's criticism; in particular, it cannot lead to the depreciation of worldliness in favour of an invented other world, which is what Nietzsche objects to. Eckhart's thought makes room for Christianity 'after Nietzsche.'
The paper considers whether virtue ethics should be regarded as excluding duty ethics or any of its essential elements. The argument suggested here consists of two steps: (1) an argument that there are two different versions of virtue ethics (moderate and strong) and that moderate virtue ethics does not exclude duty ethics; (2) an analysis of various difficulties with the strong version of virtue ethics, which shows that moderate virtue ethics is more plausible because of its capacity to avoid these difficulties. This capacity makes moderate virtue ethics more attractive as an ethical theory, because it covers the entire range of moral phenomena described both by strong versions of virtue ethics and by duty ethics, without the attendant difficulties.
The aim of the thesis is to provide the foundations for a representation-based theory of meaning, i.e. a theory of meaning that encompasses the psychological level of cognitive representations. This stands in opposition to the antipsychologist goals of the Fregean philosophy of language, and represents the results of a joint analysis of multiple problems in contemporary philosophy of language which, as argued in the thesis, stem from the lack of recognition of a cognitive level in language. In the thesis, I first provide and argue for a definition of cognitive/mental representations based on results in developmental psychology as well as theoretical considerations. Then I use the definition to build upon it a richer theory of concepts and apply it to various philosophical conundrums. The problems tackled include the problem of proper names (for which a solution is proposed that respects the post-Kripkean criticism), a unified meaning postulate for modalities, and the epistemology and ontology of mathematical terms. The thesis concludes with a proposed application of the newly acquired framework to selected social aspects of language use.
In the paper we discuss criticisms of David Armstrong's general theory of truthmaking by Gonzalo Rodriguez-Pereyra, Peter Schulte, and Benjamin Schnieder, and conclude that Armstrong's theory survives these criticisms. Special attention is given to problems concerning the Entailment Principle, the Conjunction Thesis, the Disjunction Thesis, and the notion of explanation.
Using randomly generated sequences of binary events, we asked participants to make predictions about the next event. It turned out that while predicting uncertain events, people do not behave unsystematically. Our research identifies four types of relatively consistent strategies for predicting uncertain binary events: a strategy immune to short-run sequential dependencies consisting of the persistent prediction of long-run majority events, hereafter called the long-run momentum strategy; a strategy immune to short-run sequential dependencies consisting of the persistent prediction of long-run minority events, called the long-run contrarian strategy; a strategy sensitive to short-run sequential dependencies consisting of the prediction of short-run majority events, called the short-run momentum strategy; and a strategy sensitive to short-run sequential dependencies consisting of the prediction of short-run minority events, called the short-run contrarian strategy. When the character of events remains unknown, the most common strategy is the short-run momentum strategy. As the perceived randomness of the situation increases, people more often use the short-run contrarian strategy. People differ in their general beliefs about the continuation or reversal of a trend in various natural and social processes. Trend believers, when facing sequences of binary events commonly perceived as random, tend to use momentum strategies, whereas those who believe in trend reversal tend to use contrarian strategies.
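The four strategies lend themselves to a simple operational sketch. The code below is an illustrative reading only, not the paper's coding scheme: the window size and the tie-breaking rule (ties count as a majority of 1s) are assumptions introduced for the example.

```python
import random

def long_run_momentum(history):
    # Predict the long-run majority event across the whole history seen so far.
    return int(sum(history) >= len(history) / 2)

def long_run_contrarian(history):
    # Predict the long-run minority event: the opposite of the momentum pick.
    return 1 - long_run_momentum(history)

def short_run_momentum(history, window=3):
    # Predict the majority event within only the most recent `window` trials,
    # making the prediction sensitive to short-run sequential dependencies.
    recent = history[-window:]
    return int(sum(recent) >= len(recent) / 2)

def short_run_contrarian(history, window=3):
    # Predict the minority event within the recent window.
    return 1 - short_run_momentum(history, window)

# Example: run all four strategies on one randomly generated binary sequence.
random.seed(0)
sequence = [random.randint(0, 1) for _ in range(20)]
history = sequence[:10]
predictions = {
    "long-run momentum": long_run_momentum(history),
    "long-run contrarian": long_run_contrarian(history),
    "short-run momentum": short_run_momentum(history),
    "short-run contrarian": short_run_contrarian(history),
}
print(predictions)
```

Note that each momentum/contrarian pair is complementary by construction, so on any given trial the four strategies collapse into at most two distinct predictions.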