Libertarian Papers is delighted to welcome Jakub Wiśniewski to our editorial board. Jakub Bożydar Wiśniewski is a four-time summer fellow at the Ludwig von Mises Institute, a three-time fellow at the Institute for Humane Studies, an affiliated lecturer with the Polish-American Leadership Academy, and an affiliated lecturer and a member of the Board of Trustees of the Ludwig von Mises ….
What is creativity? It is clearly something we know by seeing it manifested in a multitude of different ways and contexts. It could perhaps stand as an emblematic example of the limitations of a general explanatory account. In this anthology the editors have orchestrated an exceptionally inspiring collection of essays that explore the wide range of examples of creative language used in Wittgenstein's philosophical practice and the creative potentiality of language overall. The anthology consists of eleven essays divided into an introduction, an overture, and three parts containing three essays each. The collection offers a wide scope, ranging from styles of writing and aesthetic forms of expression to ethical reflections and...
AESTHETIC THEORY AS PROTODECONSTRUCTION
In his essay Jakub Momro points out that the later work of Theodor Adorno is impossible to understand without taking into consideration its relation to Hegel’s philosophy. Adorno discovers in Hegel’s formulations a way to transcend positively oriented dialectics, which, although it treats negativity with due seriousness, does not underscore its importance in thinking about the nature of the object and the object’s relation to subjectivity. From Hegelian logic Adorno draws out the fundamental concept of his own philosophy, namely “the logic of non-identity”. This leads him to ascertain the experiential, material and mediated character of metaphysical experience. In both cases Adorno’s thinking runs along the lines of a creative deconstruction of the concept of totality.
One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to the source of the universal. In this paper, we explore the hypothesis that many semantic universals arise because expressions satisfying the universal are easier to learn than those that do not. While the idea that learnability explains universals is not new, explicit accounts of learning that can make good on this hypothesis are few and far between. We propose a model of learning, back-propagation through a recurrent neural network, which can make good on this promise. In particular, we discuss the universals of monotonicity, quantity, and conservativity, and perform computational experiments of training such a network to learn to verify quantifiers. Our results explain monotonicity and quantity quite well. We suggest that conservativity may have a different source than the other universals.
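The verification task described above can be made concrete with a small sketch. This is not the paper's model (the paper trains a recurrent network with back-propagation); it is only a minimal, hypothetical illustration of the learning target, assuming scenes are encoded as bit-strings where position i is 1 iff the i-th A is also a B, and showing what the monotonicity universal amounts to for two sample quantifiers (the names `most` and `at_least_three` are illustrative, not from the paper):

```python
import random

# A scene is a list of bits: position i is 1 iff object i in A is also in B.
def most(scene):
    """'Most As are Bs': |A and B| > |A minus B|."""
    return scene.count(1) > scene.count(0)

def at_least_three(scene):
    """'At least three As are Bs'."""
    return scene.count(1) >= 3

def make_dataset(quantifier, n_items=1000, length=10, seed=0):
    """Labeled (scene, truth-value) pairs: the target a learner must verify."""
    rng = random.Random(seed)
    return [(scene, quantifier(scene))
            for scene in ([rng.randint(0, 1) for _ in range(length)]
                          for _ in range(n_items))]

def is_upward_monotone(quantifier, trials=2000, length=10, seed=1):
    """Spot-check upward monotonicity: adding an object to B (flipping a
    0 to 1) should never turn a true scene false."""
    rng = random.Random(seed)
    for _ in range(trials):
        scene = [rng.randint(0, 1) for _ in range(length)]
        if quantifier(scene):
            flipped = list(scene)
            zeros = [i for i, b in enumerate(flipped) if b == 0]
            if zeros:
                flipped[rng.choice(zeros)] = 1
                if not quantifier(flipped):
                    return False
    return True
```

Both sample quantifiers pass the monotonicity spot-check, which is the kind of structural property the learnability hypothesis predicts should make them easier to learn.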
Here, we explore the sensitivity of different awareness scales in revealing conscious reports of visual emotion perception. Participants were exposed to a backward masking task involving fearful faces and asked to rate their conscious awareness of perceiving emotion in facial expressions using three different subjective measures: confidence ratings (CR), with the conventional taxonomy of certainty; the perceptual awareness scale (PAS), through which participants categorize “raw” visual experience; and post-decision wagering (PDW), which involves economic categorization. Our results show that the CR measure was the most exhaustive and the most graded. In contrast, the PAS and PDW measures suggested that consciousness of emotional stimuli is dichotomous. Possible explanations of this inconsistency are discussed. Finally, our results also indicate that PDW biases awareness ratings by enhancing first-order accuracy of emotion perception, possibly as a result of higher motivation induced by monetary incentives.
We examine the verification of simple quantifiers in natural language from a computational-model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, and provides further evidence.
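The finite-automata versus push-down distinction invoked above can be illustrated with a minimal sketch, assuming (as in the standard automata-theoretic semantics for monadic quantifiers) that a scene is a string over {'1', '0'}, where '1' stands for an A that is a B. Verifying "every" needs only two states, while "most" needs an unbounded counter, i.e., push-down power. The function names are illustrative, not from the paper:

```python
def all_fa(scene):
    """'Every A is a B': a two-state finite automaton.
    Move to the rejecting state on the first '0' and stay there."""
    state = "accept"
    for symbol in scene:
        if symbol == "0":
            state = "reject"
    return state == "accept"

def most_counter(scene):
    """'Most As are Bs': simulate the single counter a push-down
    automaton would keep on its stack: push on '1', pop on '0',
    accept iff the counter ends strictly positive."""
    counter = 0
    for symbol in scene:
        counter += 1 if symbol == "1" else -1
    return counter > 0
```

No finite bound on states suffices for `most_counter`, since the counter can grow with the scene; this is the computational distinction the reaction-time experiments probe.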
In the dissertation we study the complexity of generalized quantifiers in natural language. Our perspective is interdisciplinary: we combine philosophical insights with theoretical computer science, experimental cognitive science, and linguistic theories.

In Chapter 1 we argue for identifying a part of meaning, the so-called referential meaning (model-checking), with algorithms. Moreover, we discuss the influence of computational complexity theory on cognitive tasks. We give some arguments to treat as cognitively tractable only those problems which can be computed in polynomial time. Additionally, we suggest that plausible semantic theories of the everyday fragment of natural language can be formulated in the existential fragment of second-order logic.

In Chapter 2 we give an overview of the basic notions of generalized quantifier theory, computability theory, and descriptive complexity theory.

In Chapter 3 we prove that PTIME quantifiers are closed under iteration, cumulation, and resumption. Next, we discuss the NP-completeness of branching quantifiers. Finally, we show that some Ramsey quantifiers define NP-complete classes of finite models while others stay in PTIME. We also give a sufficient condition for a Ramsey quantifier to be computable in polynomial time.

In Chapter 4 we investigate the computational complexity of polyadic lifts expressing various readings of reciprocal sentences with quantified antecedents. We show a dichotomy between these readings: the strong reciprocal reading can create NP-complete constructions, while the weak and the intermediate reciprocal readings do not. Additionally, we argue that this difference should be acknowledged in the Strong Meaning Hypothesis.

In Chapter 5 we study the definability and complexity of the type-shifting approach to collective quantification in natural language. We show that under reasonable complexity assumptions it is not general enough to cover the semantics of all collective quantifiers in natural language. The type-shifting approach cannot lead outside second-order logic, and arguably some collective quantifiers are not expressible in second-order logic. As a result, we argue that algebraic (many-sorted) formalisms dealing with collectivity are more plausible than the type-shifting approach. Moreover, we suggest that some collective quantifiers might not be realized in everyday language due to their high computational complexity. Additionally, we introduce the so-called second-order generalized quantifiers to the study of collective semantics.

In Chapter 6 we study the statement known as Hintikka's thesis: that the semantics of sentences like "Most boys and most girls hate each other" is not expressible by linear formulae, and one needs to use branching quantification. We discuss possible readings of such sentences and come to the conclusion that they are expressible by linear formulae, contrary to what Hintikka states. Next, we propose empirical evidence confirming our theoretical predictions that these sentences are sometimes interpreted by people as having the conjunctional reading.

In Chapter 7 we discuss a computational semantics for monadic quantifiers in natural language. We recall that it can be expressed in terms of finite-state and push-down automata. Then we present and criticize the neurological research building on this model. The discussion leads to a new experimental set-up which provides empirical evidence confirming the complexity predictions of the computational model. We show that the differences in reaction time needed for comprehension of sentences with monadic quantifiers are consistent with the complexity differences predicted by the model.

In Chapter 8 we discuss some general open questions and possible directions for future research, e.g., using different measures of complexity, involving game theory, and so on.

In general, our research explores, from different perspectives, the advantages of identifying meaning with algorithms and applying computational complexity analysis to semantic issues. It shows the fruitfulness of such an abstract computational approach for linguistics and cognitive science.
We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems finding effective algorithms is hardly possible. Some problems need too many computational resources, e.g., time or memory, to be practically computable. Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes, and in particular, to identify efficiently solvable problems and draw a line between tractability and intractability.

We survey how complexity can be used to study the computational plausibility of cognitive theories. We especially emphasize the methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to examples of applying the logical and computational complexity toolbox in different domains of cognitive science. We focus mostly on theoretical and experimental research in psycholinguistics and social cognition.
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide insight into the branching operation, which yields intractable natural-language multi-quantifier expressions. Next, we focus on a linguistic case study: we use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences, and we show a computational dichotomy between different readings of reciprocity. Finally, we turn to more philosophical speculation on meaning, ambiguity, and computational complexity. In particular, we investigate the possibility of revising the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
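As a minimal illustration of why the standard polyadic constructions stay tractable, the iterated readings of two-quantifier sentences can be checked by nested loops in polynomial time. The sketch below (hypothetical helper names, not the paper's code) verifies "Every a stands in R to some b" and "Some a stands in R to every b" over finite sets, each in O(|A|·|B|) time; it is the branching readings, not these, that can turn NP-complete:

```python
def every_some(A, B, R):
    """Iterated reading of 'Every a R's some b':
    for each a, scan B for a witness — polynomial time."""
    return all(any((a, b) in R for b in B) for a in A)

def some_every(A, B, R):
    """Iterated reading of 'Some a R's every b':
    scan A for one element related to all of B — also polynomial time."""
    return any(all((a, b) in R for b in B) for a in A)

# A small model: two As, two Bs, R a set of pairs.
A = {"a1", "a2"}
B = {"b1", "b2"}
R = {("a1", "b1"), ("a2", "b2")}
```

Here `every_some(A, B, R)` holds (each a has a witness) while `some_every(A, B, R)` fails (no single a is related to every b), showing how the two iterated readings come apart on the same model.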
We consider the notion of everyday language. We claim that everyday language is semantically bounded by the properties expressible in the existential fragment of second-order logic. Two arguments for this thesis are formulated. First, we show that the so-called Barwise test of negation normality works properly only when assuming our main thesis. Second, we discuss the argument from practical computability for finite universes. Everyday-language sentences are directly or indirectly verifiable. We show that in both cases they are bounded by second-order existential properties. Moreover, there are known examples of everyday-language sentences which are the most difficult in this class (NPTIME-complete).
The main aim of this essay is to show that, for Stevens, the concept of reality is in constant flux. The essay begins by addressing the relationship between poetry and philosophy. I argue, contra Critchley, that Stevens’ poetic work can elucidate, or at least help us to understand better, the ideas of philosophers that are usually considered obscure. The main “obscure” philosophical work introduced in and discussed throughout the essay is Schelling’s System of Transcendental Idealism. Both a (Schellingian) philosopher and a (Stevensian) poet search for reality. In order to understand Stevens’ poetry better, I distinguish several concepts of reality: initial reality (the external world of common sense), imagined reality (a fiction, a product of one’s mind), final reality (the object of a philosopher’s and a poet’s search), and total reality (the sum of all realities, Being). These determinations are fixed by reason (in the present essay), whereas in Stevens’ poetic works they are made fluid by the imagination. This fluidity leads the concept of reality from its initial stage through the imagined stage to its final stage. Throughout this process, imagined reality must be distinguished from mere fancy and its products. Final reality is, however, nothing transcendent. It is rather a general transpersonal order of reality created by poetry/the imagination. The main peculiarity of final reality is that it is a dynamic order: it is provisional at each moment. Stevens (and Schelling too) characterizes this order as that of a work of art, which is a finite object but has an infinite meaning. Stevens calls this order “the central poem” or the “endlessly elaborating poem”. If ultimate reality is a poem created by the imagination, one may ask who the imagining subject is. I argue that this agent is best thought of as total reality, that is, as Being. Stevens, however, maintains that if there were such an agency, it would be an inhuman agency, “an inhuman meditation”. The essay concludes, in a Derridean manner, with the claim that this agency cannot have any name; it is the “unnamed creator of an unknown sphere, / Unknown as yet, unknowable, / Uncertain certainty” (OP: 127). It is best thought of as an X, an unknown variable. Being has no name.
We discuss the thesis formulated by Hintikka (1973) that certain natural language sentences require non-linear quantification to express their meaning. We investigate sentences with combinations of quantifiers similar to Hintikka's examples and propose a novel alternative reading expressible by linear formulae. This interpretation is based on linguistic and logical observations. We report on our experiments showing that people tend to interpret sentences similar to Hintikka's in a way consistent with our interpretation.
For G a group definable in some structure M, we define notions of “definable” compactification of G and “definable” action of G on a compact space X, where the latter is under a definability-of-types assumption on M. We describe the universal definable compactification of G, and identify the universal definable G-ambit with the type space S_G. We also point out the existence and uniqueness of “universal minimal definable G-flows”, and discuss issues of amenability and extreme amenability in this definable category, with a characterization of the latter. For the sake of completeness we also describe the universal compactification and universal G-ambit in model-theoretic terms when G is a topological group.
Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger data set: eye-tracking corpus data. It is shown that the resulting model fits the data well for the low-level processes considered. Unlike previous related work, the model achieves the fit by estimating free parameters of ACT-R using Bayesian estimation and Markov chain Monte Carlo techniques, rather than by relying on a mix of manual selection and default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models.
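The estimation strategy described above can be sketched in miniature. The following is not the paper's model; it is a toy Metropolis sampler (flat prior, symmetric Gaussian proposals, hypothetical names throughout) that recovers a single latency-like parameter `mu` from synthetic reading times, illustrating the Bayesian/MCMC approach to fitting a free parameter rather than hand-picking it:

```python
import math
import random

def log_likelihood(mu, data, sigma=0.05):
    """Gaussian log-likelihood (up to a constant) of reading times
    under a stand-in model with one free latency parameter mu."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)

def metropolis(data, n_steps=5000, step=0.02, seed=0):
    """Random-walk Metropolis: propose mu' ~ N(mu, step); with a flat
    prior, the acceptance ratio is just the likelihood ratio."""
    rng = random.Random(seed)
    mu = 0.5                      # deliberately poor starting guess
    ll = log_likelihood(mu, data)
    samples = []
    for _ in range(n_steps):
        proposal = mu + rng.gauss(0, step)
        ll_new = log_likelihood(proposal, data)
        if math.log(rng.random()) < ll_new - ll:
            mu, ll = proposal, ll_new
        samples.append(mu)
    return samples

# Synthetic "reading times" centered on 0.3 s.
rng = random.Random(1)
data = [rng.gauss(0.3, 0.05) for _ in range(200)]
samples = metropolis(data)
# Discard burn-in, then summarize the posterior.
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

The chain walks away from the bad initial guess and settles near the true value; the posterior mean (and spread) then replaces a manually selected point estimate, which is the core of the approach the abstract describes.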