At a purely instrumental level, quantum theory is all about multiplication, addition and taking mod squares of complex numbers called probability amplitudes. The rules for combining amplitudes are deceptively simple. When two or more events are independent, you multiply their respective probability amplitudes; when they are mutually exclusive, you add them. Whenever you want to calculate probabilities, you take the mod squares of the respective amplitudes. That's it. If you are prepared to ignore the explanatory power of the theory (which you should not), the rest is just a set of convenient mathematical tools developed for the purpose of book-keeping of amplitudes. Thus we tabulate amplitudes into state vectors and unitary matrices, and place them in Hilbert spaces.
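As a toy illustration of these combination rules, consider a particle that can reach a detector along two mutually exclusive paths, each consisting of two independent steps. The amplitudes below are made-up values chosen only to show the arithmetic, not drawn from any particular experiment.

```python
# Toy illustration of the amplitude rules (made-up numbers, not a real experiment).
# Independent events: multiply amplitudes; mutually exclusive alternatives: add them;
# probabilities: take the mod square of the total amplitude.

# Two independent steps along path A (e.g., emission then transmission)
amp_A = (0.6 + 0.2j) * (0.7 - 0.1j)   # multiply: independent events

# Two independent steps along path B
amp_B = (0.5 - 0.3j) * (0.4 + 0.4j)

# The two paths are mutually exclusive alternatives, so their amplitudes add
amp_total = amp_A + amp_B

# Probability of detection: mod square of the total amplitude
prob = abs(amp_total) ** 2
print(prob)
```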
Combining physics, mathematics and computer science, quantum computing has developed in the past two decades from a visionary idea to one of the most fascinating areas of quantum mechanics. The recent excitement in this lively and speculative domain of research was triggered by Peter Shor (1994), who showed how a quantum algorithm could exponentially "speed up" classical computation and factor large numbers into primes much more rapidly (at least in terms of the number of computational steps involved) than any known classical algorithm. Shor's algorithm was soon followed by several other algorithms aimed at solving combinatorial and algebraic problems, and in the last few years the theoretical study of quantum systems serving as computational devices has achieved tremendous progress. Common belief has it that the implementation of Shor's algorithm on a large-scale quantum computer would have devastating consequences for current cryptography protocols, which rely on the premiss that all known classical worst-case algorithms for factoring take time exponential in the length of their input (see, e.g., Preskill 2005). Consequently, experimentalists around the world are engaged in tremendous efforts to overcome the technological difficulties that stand in the way of realizing such a large-scale quantum computer. But regardless of whether these technological problems can be overcome (Unruh 1995, Ekert and Jozsa 1996, Haroche and Raimond 1996), it is noteworthy that no proof exists yet for the general superiority of quantum computers over their classical counterparts.
The aim of this paper is to show that it is the explicative character of Tarski's semantic definition of truth given in his study of 1933 that allows for consideration of a philosophical background of this definition in the proper sense. Given the explicative character of this definition, it is argued that the philosophical tradition that should be taken into account with regard to this philosophical background is the tradition of the Lvov-Warsaw School in its connections with the School of Brentano. As an example of the explanatory power of considering this tradition as far as Tarski's philosophical choices are concerned, I use here the notion of sentence-inscription, i.e., the notion of that entity of which truth is predicated in the definition in question. One of the consequences of these statements is that philosophical discussions concerning the semantic definition of truth can be regarded from two points of view. On the one hand, they may take the perspective of its explicational function, i.e., the perspective of its philosophical background. On the other hand, they might consider the philosophical consequences of the definition with respect to the goal of the explication, i.e., they may consider its philosophical content independently of its historical background.
It is proposed that one musically interesting way to characterise and compare different performances or recordings of the same piece is by correlating them with different Schenkerian interpretations through the medium of grouping. This approach is demonstrated through an examination of four 'either/or' passages from the first movement of Beethoven's Piano Sonata in E-flat Major, Op. 81a, passages in which at least two Schenkerian interpretations are possible. Schenker's own published and unpublished sketches, among others, are considered alongside recordings by Vladimir Ashkenazy, Emil Gilels, Richard Goode, Murray Perahia and Artur Rubinstein. The approach is not meant to be self-sufficient, but rather to contribute a new set of tools to the emerging multidisciplinary field of performance studies.
The methodology of external cost calculations has been continuously developed. Still, due to a lack of sufficient data and imperfect analytical tools, one can give only rough estimates rather than precise results. The impact of primary pollutants dominates in the local domain. Secondary pollutants are negligible in the local domain; however, they have a strong impact in the regional domain. The impacts of primary and secondary pollutants were estimated with the best URBAN algorithm and SUWM formulation, respectively.
This paper studies methodologically robust options for giving logical contents to nodes in abstract argumentation networks. It defines a variety of notions of attack in terms of the logical contents of the nodes in a network. General properties of logics are refined both at the object level and at the metalevel to suit the needs of the application. The network-based system improves upon some of the attempts in the literature to define attacks in terms of defeasible proofs, the so-called rule-based systems. We also provide a number of examples and consider a rigorous case study, which indicate that our system does not suffer from anomalies. We define consequence relations based on a notion of defeat, consider rationality postulates, and prove that one such consequence relation is consistent.
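For readers unfamiliar with the underlying formalism, the following minimal sketch shows how an abstract argumentation network can be represented as a set of nodes and an attack relation, together with a naive grounded-style acceptability check. The nodes, attacks and acceptability notion here are illustrative assumptions, not the definitions developed in the paper.

```python
# Minimal sketch of an abstract argumentation network (illustrative only;
# the nodes, attacks, and acceptability notion are assumed, not the paper's).

nodes = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c

def attackers(x):
    """Return the set of nodes attacking x."""
    return {s for (s, t) in attacks if t == x}

def defended(x, by):
    """x is defended by `by` if every attacker of x is attacked by some member of `by`."""
    return all(any((d, att) in attacks for d in by) for att in attackers(x))

# Grounded-style fixpoint: start from the unattacked nodes and keep adding
# every node that the current set defends, until nothing changes.
accepted = set()
changed = True
while changed:
    new = {x for x in nodes if defended(x, accepted)}
    changed = new != accepted
    accepted = new

print(accepted)   # {'a', 'c'}: a is unattacked, and a defends c against b
```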
Abduction is or subsumes a process of inference. It entertains possible hypotheses and it chooses hypotheses for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. Unlike previous work in the area, our aim is to promote the integration of reasoning and learning in such a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability to the systems, facilitating the interaction with the outside world. Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
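To make the symbolic side of this task concrete, here is a minimal sketch of abduction over Horn clauses: it computes the sets of abducible hypotheses that would explain an observation by backward chaining. The rules, abducibles and observation are assumed toy examples, and the sketch does not reproduce the neural-symbolic architectures proposed in the paper.

```python
# Illustrative sketch of abduction over Horn clauses (the rules, abducibles,
# and observation are assumed examples; this is the plain symbolic computation,
# not the neural-symbolic architectures discussed in the paper).

# Each rule maps a head atom to its possible bodies (lists of atoms).
rules = {
    "wet_grass": [["rained"], ["sprinkler_on"]],
}
abducibles = {"rained", "sprinkler_on"}   # hypotheses we are allowed to assume

def explanations(goal):
    """Return the sets of abducibles that would explain `goal`, by backward chaining."""
    if goal in abducibles:
        return [{goal}]
    results = []
    for body in rules.get(goal, []):
        # Every atom in the body must be explained; combine their explanations.
        combos = [set()]
        for atom in body:
            combos = [c | e for c in combos for e in explanations(atom)]
        results.extend(combos)
    return results

print(explanations("wet_grass"))   # [{'rained'}, {'sprinkler_on'}]
```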
‘Two Dogmas of Empiricism’ quickly became a classic of analytical philosophy and provoked a lasting discussion about the possibility of the analytic/synthetic distinction. It has also been considered a nail in the coffin of logical positivism. Accordingly, Quine tried to show that logical positivism was possible solely due to assumptions taken without justification in terms of the standards preached by neopositivism itself. Quine aimed to point out that, since these assumptions functioned as dogmas, the rescuing of empiricism was possible only if another approach was accepted, one characterized as holism.
The paper reviews the three most widely discussed philosophical attempts at defining the notion of semantic content of information: Bar-Hillel's and Carnap's, Dretske's and Floridi's. Each of these theories is limited to a very narrow range of information phenomena. The study aims at analysing these limits and their philosophical background.