A permissivist framework is developed to include images in the reconstruction of the evidential base and of the theoretical content. The paper uses Newton’s optical theory as a case study to discuss mathematical idealizations and depictions of experiments, together with textual correlates of diagrams. Instead of assuming some specific type of theoretical content, the focus is on novel traits that become delineable when studying the carriers of a theory. The framework is developed to trace elliptic and ambiguous message design, and it utilizes variegated acceptance as an asset. Newton’s resources allowed for various framing modes and reconstructions, entailing various judgements concerning the theoretical content, the evidence base, and Newton’s use of mathematics. The elliptic presentation of the theory’s proof-structure and its ambiguities influenced uptake and contributed to opinion polarization and to the acceptance or rejection of the theory. The study suggests that the analysed carriers of theoretical content have an argumentative function, and that one of their uses is to adjust the burden of proof.
Olaf Müller’s book develops a new case for underdetermination and, as he focuses on theories of a ‘limited domain’, this assumes the containability of the theories. First, the paper argues that Müller’s theory of darkness is fundamentally Newtonian, but that for Newton’s optical theory the type of theoretical structure Müller adopts is problematic. Second, the paper discusses seventeenth-century challenges to Newton, changes in the proof-structure of Newton’s optical theory, and how these affect Müller’s reconstruction. Müller’s book provides empirically equivalent theories, yet the historical theories were not empirically equivalent, and the same experiments were used to extract different bodies of evidence to rebut the opponent. Third, Goethe’s multi-layered critique of Newton’s experimental proof is investigated, including his developmental account of prismatic colours, the role of experimental series in rejecting Newton’s observations, and his incorporation of the ‘limited domain’ of prismatic colours in a broader framework. Two key elements of Goethe’s method, polarity and strengthening, are discussed in contrast to Müller, who only utilises polarity in his account. Finally, Neurath’s attempts to come to grips with the optical controversies and the prism experiments with ‘blurred edges’ are recalled. Müller also discusses some of these experiments in detail and draws heavily on Quine. Neurath developed Duhem’s and Poincaré’s conventionalist insights and had good reasons to be pessimistic about theory-containment. Their differences provide some additions to the history of the Duhem–Quine thesis.
The book then discusses another group of issues ("whether it is, what it is, how and why it is"), which determined the argumentation and the axiomatic ordering of the sciences, and concludes with a demonstration on the basis of concrete ...
Current philosophical reflections on science have departed from mainstream history of science with respect to both methodology and conclusions. The article investigates how different approaches to reconstructing commitments can explain these differences and facilitate a mutual understanding and communication between these two perspectives on science. Translating the differences into problems pertaining to principles of charity, the paper offers a platform for clarification and resolution of the differences between the two perspectives. The outlined contextual approach occupies a middle ground between mainstream history and sociology of science, which bracket questions of rationality, and individual coherence-maximizing, rationality-centered approaches. It can satisfy those who believe that science is an epistemically privileged endeavor and that its epistemic content should not be neglected when reconstructing the scientists’ positions. It can also satisfy those who hold that it is naive to believe that the immediate context, e.g. the challenges to a theory, the expectations of the author about his audience, etc., does not affect the position a scientist takes. Its theoretical considerations are exemplified with a close study of the debate following the 1672 publication of Newton’s theory of light and colours, also offering a novel reading of the development of his methodological views concerning the demonstrativity of the famous crucial experiment. Although we only show the capacity of the framework to analyze a direct controversy, given that it is hard to think of any scientific text as detached from an argumentative context, this approach has the potential to be a general guide for interpretation.
The Borel–Kolmogorov Paradox is typically taken to highlight a tension between our intuition that certain conditional probabilities with respect to probability zero conditioning events are well defined and the mathematical definition of conditional probability by Bayes’ formula, which loses its meaning when the conditioning event has probability zero. We argue in this paper that the theory of conditional expectations is the proper mathematical device for conditionalization and that this theory allows conditionalization with respect to probability zero events. The conditional probabilities on probability zero events in the Borel–Kolmogorov Paradox can also be calculated using conditional expectations. The alleged clash arising from the fact that one obtains different values for the conditional probabilities on probability zero events, depending on what conditional expectation one uses to calculate them, is resolved by showing that the different conditional probabilities obtained using different conditional expectations cannot be interpreted as calculating, in different parametrizations, the conditional probabilities of the same event with respect to the same conditioning conditions. We conclude that there is no clash between the correct intuition about what the conditional probabilities with respect to probability zero events are and the technically proper concept of conditionalization via conditional expectations—the Borel–Kolmogorov Paradox is just a pseudo-paradox.
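As a minimal illustration of the device invoked above (this is the standard measure-theoretic definition, not a construction specific to the paper): given a probability space $(\Omega, \mathcal{F}, p)$ and a sub-σ-algebra $\mathcal{A} \subseteq \mathcal{F}$, the conditional probability of an event $B$ is the $\mathcal{A}$-measurable function $p(B \mid \mathcal{A}) = E[\chi_B \mid \mathcal{A}]$ characterized, up to sets of measure zero, by

$$\int_A E[\chi_B \mid \mathcal{A}] \, dp = p(B \cap A) \quad \text{for all } A \in \mathcal{A},$$

a condition that remains meaningful when individual conditioning events in $\mathcal{A}$ have probability zero, whereas Bayes’ formula $p(B \mid A) = p(B \cap A)/p(A)$ does not.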
Otto Neurath’s thoroughgoing anti-foundationalism is connected to the recognition that protocol sentences are not inviolable, that is, they are fallible and their choice cannot be determined: ‘Poincaré, Duhem and others have adequately shown that even if we have agreed on the protocol statements, there is a not limited number of equally applicable, possible systems of hypotheses. We have extended this tenet of the uncertainty of systems of hypotheses to all statements, including protocol statements that are alterable in principle’. Later historiography has called Neurath’s extension of Duhemian holism the Neurath principle. Based on a study of Neurath’s early works on the history of optics, the paper investigates a previously unnoticed influence on the development of this principle: Neurath’s reading of Goethe’s Theory of Colours. The historical and polemical parts of Goethe’s tripartite book provided Neurath with ideal examples for the vertical extension of Duhem’s thesis to observation statements. Moreover, Goethe’s critique of the language of science and his views on the theory-ladenness of observation, as well as on the history of science, show strong parallels to many of Neurath’s ideas. These demonstrate the existence of surprisingly direct textual links between Romantic views on science and the development of twentieth-century philosophy of science. Neurath’s usage of Goethe’s examples also indicates that the birth of the Neurath principle is more tightly connected to actual scientific practice than to theory-testing, and that by admitting the theory-ladenness of observation reports and the fallibility of protocol statements Neurath does not throw empiricism overboard. Keywords: Otto Neurath; J. W. von Goethe; Holism; Theory-ladenness; Duhem–Quine thesis; Romantic science.
The common cause principle says that every correlation is either due to a direct causal effect linking the correlated entities or is brought about by a third factor, a so-called common cause. The principle is of central importance in the philosophy of science, especially in causal explanation, causal modeling and the foundations of quantum physics. Written for philosophers of science, physicists and statisticians, this book contributes to the debate over the validity of the common cause principle by proving results that bring to the surface the nature of explanation by common causes. It offers a technical and mathematically rigorous examination of the notion of common cause, providing an analysis not only in terms of classical probability measure spaces, which is typical in the available literature, but in quantum probability theory as well. The authors provide numerous open problems to further the debate and encourage future research in this field.
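For orientation, the Reichenbachian conditions at issue, as standardly stated in this literature (the book’s own definitions may be more general), require a common cause $C$ of a correlation $p(A \cap B) > p(A)p(B)$ to satisfy

$$p(A \cap B \mid C) = p(A \mid C)\,p(B \mid C), \qquad p(A \cap B \mid C^{\perp}) = p(A \mid C^{\perp})\,p(B \mid C^{\perp}),$$
$$p(A \mid C) > p(A \mid C^{\perp}), \qquad p(B \mid C) > p(B \mid C^{\perp}),$$

where $C^{\perp}$ is the complement of $C$; the first two conditions express that $C$ and $C^{\perp}$ screen off the correlation.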
The study describes a method created for the analysis of persuasive strategies, called rhetorical heuristics, which can be applied in speeches where the argument focuses primarily on questions of fact. First, the author explains how the concept emerged from the study of classical oratory. Then the theoretical background of rhetorical heuristics is outlined through briefly discussing relevant aspects of the psychology of decision-making. Finally, an exposition of how one could find these persuasive strategies introduces rhetorical heuristics in more detail.
"In this book, Gabor Csepregi describes in detail the nature and scope of the body's innate abilities and reflects on their significance in human life."--BOOK JACKET.
We show that there is a restriction, or modification, of the finite-variable fragments of first-order logic in which a weak form of Craig’s Interpolation Theorem holds but a strong form of this theorem does not hold. Translating these results into algebraic logic, we obtain a finitely axiomatizable subvariety of finite-dimensional representable cylindric algebras that has the Strong Amalgamation Property but does not have the Superamalgamation Property. This settles a conjecture of Pigozzi [12].
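For readers outside algebraic logic: in its standard first-order form, Craig’s theorem states that if $\models \varphi \rightarrow \psi$, then there is an interpolant $\chi$, built from the non-logical symbols common to $\varphi$ and $\psi$, such that $\models \varphi \rightarrow \chi$ and $\models \chi \rightarrow \psi$. In finite-variable fragments the weak and strong forms differ, roughly, in whether the interpolant may use additional variables or must itself lie in the given fragment (our gloss, not the abstract’s wording).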
The advent of functional imaging has reinforced the attempts to define dreaming as a sleep state-dependent phenomenon. PET scans revealed major differences between nonREM sleep and REM sleep. However, because dreaming occurs throughout sleep, the common features of the two sleep states, rather than the differences, could help define the prerequisite for the occurrence of dreams. [Hobson et al.; Nielsen; Solms; Revonsuo; Vertes & Eastman].
The paper first discusses the metaphysical framework that allows the soul’s integration into the physical world. A close examination of B36, supported by the comparative evidence of some other early theories of the soul, suggests that the word psuchê could function as both a mass term and a count noun for Heraclitus. There is a stuff in the world, alongside other physical elements, that manifests mental functions. Humans, and possibly other beings, show mental functions in so far as they have a portion of that stuff. Turning to the physical characterization of the soul, the paper argues that B36 is entirely consistent with the ancient testimonies that say that psuchê for Heraclitus is exhalation. But exhalations cover all states of matter from the lowest moist part of atmospheric air to the fire of celestial bodies. If so, psuchê for Heraclitus is both air and fire. The fact that psuchê can manifest the whole range of physical properties along the dry–wet axis guarantees that souls can show different intellectual and ethical properties as well. Moreover, Sextus Empiricus, supported by some other sources, provides us with an answer to the question of how portions of soul stuff are individuated into individual souls. The paper closes with a brief discussion of the question whether, and if so with what qualifications, we can apply the term ‘physicalism’ to Presocratic theories of the soul.
The present volume has grown out of a conference organized jointly by the History of Philosophy Department of the University of Miskolc and the History and Philosophy of Science Department of Eötvös Loránd University (Budapest), which took place in June 2002. The aim of the conference was to explore the various angles from which intentionality can be studied, how it is related to other philosophical issues, and how it figures in the works of major philosophers in the past. It also aimed at facilitating the interaction between the analytic and phenomenological traditions, which both regard intentionality as one of the most important problems for philosophy. Indeed, intentionality has sometimes provided inspiration for works bridging the gap between the two traditions, like Roderick Chisholm’s in the sixties and Dagfinn Føllesdal’s and his students’ in the early eighties. These objectives were also instrumental in the selection of the papers for this volume. Instead of very specialized papers on narrow issues, we gave preference to papers with a broader focus, which (1) juxtapose different approaches and traditions or (2) link the issues of intentionality with other philosophical concerns.
Here we investigate the classes RCA $^\uparrow_\alpha$ of representable directed cylindric algebras of dimension α introduced by Németi [12]. RCA $^\uparrow_\alpha$ can be seen in two different ways: first, as an algebraic counterpart of higher order logics and, second, as a cylindric algebraic analogue of Quasi-Projective Relation Algebras. We will give a new, "purely cylindric algebraic" proof for the following theorems of Németi: (i) RCA $^\uparrow_\alpha$ is a finitely axiomatizable variety whenever α ≥ 3 is finite and (ii) one can obtain a strong representation theorem for RCA $^\uparrow_\alpha$ if one chooses an appropriate (non-well-founded) set theory as foundation of mathematics. These results provide a purely cylindric algebraic solution for the Finitization Problem (in the sense of [11]) in some non-well-founded set theories.
In a recent book C.S. Jenkins proposes a theory of arithmetical knowledge which reconciles realism about arithmetic with the a priori character of our knowledge of it. Her basic idea is that arithmetical concepts are grounded in experience and it is through experience that they are connected to reality. I argue that the account fails because Jenkins’s central concept, the concept of grounding, is inadequate. Grounding as she defines it does not suffice for realism, and by revising the definition we would abandon the idea that grounding is experiential. Her account falls prey to a problem of which Locke, whom she regards as a source of inspiration, was aware and which he avoided by choosing anti-realism about mathematics.
The subject of the paper is the shift from an astrology-oriented astronomy towards an allegedly more objective, mathematically grounded approach to astronomy. This shift is illustrated through a close reading of Tycho Brahe’s scientific development and the contemporaneous changes in his communicational strategies. Basing the argument on a substantial array of original sources, it is claimed that the Danish astronomer developed a new astronomical discourse in pursuit of credibility, giving priority to observational astronomy and natural philosophical questions. The abandonment of astrology in public discourse is primarily explained by Tycho’s social position and greater sensibility to controversial issues. Tycho’s example suggests that the changes in rhetorical strategies regarding astrology should be given more recognition in the history of astronomy.
The Mona Lisa, the Mondscheinsonate, and the Chanson d’automne are works of art; the salt shaker on your table, the car in your garage, or the pyjamas on your bed are not. The basic question of the metaphysics of works of art is this: what makes a thing a work of art? That is: what sort of property do works of art have in virtue of which they are works of art? Or more simply: what sort of property is being a work of art? In this paper we argue that things are works of art in virtue of what they are like, their intrinsic features, that is, in virtue of the fact that they have the perceptual (auditory, visual, etc.) properties they have. In other words: being a work of art supervenes on perceptual-intrinsic features. Currently, this metaphysical view is extremely unpopular within the philosophy of art. It is unpopular because there allegedly exists a knock-down objection to it, the well-known argument from indiscernible counterparts. Our thesis implies, among other things, that every perceptual duplicate of a work of art is also a work of art. According to the argument from indiscernible counterparts, however, there could be (or even: there are) indiscernible counterparts such that one of them is a work of art while the other is not. Hence things cannot be works of art solely in virtue of what they are like. Our paper divides into three parts. In the first part we state our view. In the second part we defend it against various versions of the argument from indiscernible counterparts. (In doing so our position will, we hope, become more plausible.) In the final part we provide some meta-reflections on the matter.
The purpose of the paper is to draw attention to a kind of rational persuasion which has received little attention in argument studies even though its existence is acknowledged in other fields. I start with a brief analysis of the debates conducted in the comments on a philosophical blog. The posts are addressed to a non-academic audience, always end with a problem, and the reader is invited to offer a solution. In the comments we hardly ever find arguments in the usual sense, i.e. in the sense that an argument consists of a set of premises providing justification for a conclusion. It is not that the arguments are laid out carelessly and require a good deal of reconstruction: typically, there is no argument to reconstruct. The author simply states his view, then goes on to sketch a larger picture of which his view is a part. In the responses to the comments we find the same: identification of the point of disagreement, no argument, elaboration of the preferred view in some detail. One might say that this is just a failure of rational discussion; in pragma-dialectical terms, the discussion gets stuck in the opening stage. But there is another way of looking at the matter. Since the participants do not have sufficiently rich common background knowledge to take premises from, they cannot offer arguments proper and must resort to a different sort of rational persuasion. They try to show that their view can be extended into a larger, coherent picture. This makes good sense from an epistemological point of view. If a view is false, in trying to work out its details we sooner or later run into problems, so allowing coherent elaboration provides some degree of justification. One might say that the participants argue in the broad sense of trying to persuade by rational means, but they do that without adducing arguments proper. The claim that there is a way of rational persuasion which does not proceed by arguments proper can be further substantiated by noting that there are debates whose status as rational discussion, as opposed to the comments on the philosophical blog, is not controversial. In discussing the work of some outstanding contemporary philosophers, Gary Gutting argues that even though they do offer arguments, it is not by their arguments that they have convinced others of the viability of their approach, but by what he calls ‘persuasive elaboration’. We may also think of Kuhn’s account of the debates surrounding changes in paradigms: the arguments fail to persuade because their premises are not shared, but showing that the new paradigm can be extended to a vast variety of phenomena eventually succeeds. I finish by listing some questions this kind of rational persuasion may raise for argument studies.
The paper is an attempt to interpret Imre Lakatos’s methodology of scientific research programmes (MSRP) on the basis of his mathematical methodology, the method of proofs and refutations (MPR). After sketching MSRP and MPR and analysing their relationship to Popper’s and Pólya’s work, I argue that MSRP was originally conceived as a methodology in the same sense as MPR. The most conspicuous difference between the two, namely that MSRP is fundamentally backward-looking whereas MPR is primarily forward-looking, is due to the fact that Lakatos could not carry out his project in the full sense. I also explain why he could not.
This paper presents the unusual story of the efforts of the political agent and pamphleteer Kaspar Schoppe to rehabilitate Machiavelli. Unlike the few earlier attempts by Machiavelli’s Florentine descendants, Schoppe’s campaign was motivated by complex factors, which were in great part related to his vision of Catholic renewal. Through the story of Schoppe’s campaign for Machiavelli, this paper offers not only a novel interpretation of this fascinating figure of the Counter-Reformation but also insight into the problems of science and political philosophy in the Catholic world.
We study sincere-strategy preference-based approval voting (SP-AV), a system proposed by Brams and Sanver [1] and here adjusted so as to coerce admissibility of the votes, with respect to procedural control. In such control scenarios, an external agent seeks to change the outcome of an election via actions such as adding/deleting/partitioning either candidates or voters. SP-AV combines the voters’ preference rankings with their approvals of candidates, where in elections with at least two candidates the voters’ approval strategies are adjusted – if needed – to approve of their most-preferred candidate and to disapprove of their least-preferred candidate. This rule coerces admissibility of the votes even in the presence of control actions, and hybridizes, in effect, approval with plurality voting. We prove that this system is computationally resistant to 19 out of 22 types of constructive and destructive control. Thus, SP-AV has more resistances to control than is currently known for any other natural voting system with a polynomial-time winner problem. In particular, SP-AV is the second natural voting system with an easy winner-determination procedure that is known to have full resistance to constructive control, and unlike Copeland voting it in addition displays broad resistance to destructive control.
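A minimal sketch in Python of the coercion rule described above, under illustrative assumptions: the ballot representation and the helper names (coerce_admissible, spav_winners) are ours, not the authors’, and winner determination is modelled simply as taking the candidates with the most approvals on the coerced ballots.

```python
from collections import Counter

def coerce_admissible(ranking, approved):
    """ranking: candidates from most- to least-preferred; approved: set of approved candidates."""
    approved = set(approved)
    if len(ranking) >= 2:
        approved.add(ranking[0])       # approve the most-preferred candidate
        approved.discard(ranking[-1])  # disapprove the least-preferred candidate
    return approved

def spav_winners(votes):
    """votes: list of (ranking, approved) pairs; winners are the candidates with the most approvals."""
    tally = Counter()
    for ranking, approved in votes:
        for c in coerce_admissible(ranking, approved):
            tally[c] += 1
    top = max(tally.values())
    return {c for c, n in tally.items() if n == top}

votes = [(["a", "b", "c"], {"b"}),       # coerced to {"a", "b"}
         (["b", "c", "a"], {"a", "b"}),  # coerced to {"b"}
         (["c", "a", "b"], set())]       # coerced to {"c"}
print(spav_winners(votes))               # {'b'}
```

Because the coercion is re-applied to every ballot, it keeps the votes admissible even after control actions such as deleting a voter's most-approved candidate.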
We will study the class RSA of -dimensional representable substitution algebras. RSA is a sub-reduct of the class of representable cylindric algebras, and it was an open problem in Andréka [1] whether RSA can be finitely axiomatized. We will show that the answer is positive. More concretely, we will prove that RSA is a finitely axiomatizable quasi-variety. The generated variety is also described. We note that RSA is the algebraic counterpart of a certain propositional multimodal logic and that it is related to a natural fragment of first-order logic as well.
I agree with Robbert Van den Berg that Plotinus endorses Socratic intellectualism, but I challenge his view that Plotinus rejects the phenomenon of akrasia. According to Van den Berg, the only form of akrasia acknowledged by Plotinus is a conditional, or ‘weak,’ akrasia. I provide some reasons for thinking that Plotinus might have accepted complete or ‘strong’ akrasia—full stop. While such strong forms of akrasia are usually taken to conflict with Socratic intellectualism, I argue that Plotinus’s complex, dual-self psychology allows a way in which he, unique among ancient philosophers, could simultaneously endorse Socratic intellectualism and hard akrasia.
The notion of ultratopologies was introduced in [6], motivated by the model theory of first and higher order logics. In [6] we established some model theoretical applications of ultratopologies; for example, we provided a purely set theoretical characterization for classes definable by second order existential formulas. The present note deals with topological properties of ultratopologies, like density and compactness.
A partition $\{C_i\}_{i\in I}$ of a Boolean algebra Ω in a probability measure space (Ω, p) is called a Reichenbachian common cause system for the correlation between a pair A, B of events in Ω if any two elements in the partition behave like a Reichenbachian common cause and its complement; the cardinality of the index set I is called the size of the common cause system. It is shown that given any non-strict correlation in (Ω, p), and given any finite natural number n > 2, the probability space (Ω, p) can be embedded into a larger probability space in such a manner that the larger space contains a Reichenbachian common cause system of size n for the correlation.
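Spelled out, following the usual definition in this literature (notation ours): $\{C_i\}_{i\in I}$ is a Reichenbachian common cause system for the correlation between $A$ and $B$ just in case

$$p(A \cap B \mid C_i) = p(A \mid C_i)\,p(B \mid C_i) \quad \text{for all } i \in I,$$
$$\bigl[p(A \mid C_i) - p(A \mid C_j)\bigr]\bigl[p(B \mid C_i) - p(B \mid C_j)\bigr] > 0 \quad \text{for all } i \neq j,$$

so that any two cells behave like a Reichenbachian common cause and its complement; the case $|I| = 2$ recovers Reichenbach’s original definition.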
Which ultraproducts preserve the validity of formulas of higher order logics? To answer this question, we will introduce natural topologies on ultraproducts. We will show that ultraproducts preserving certain higher order formulas can be characterized in terms of these topologies. As an application of the above results, we provide a constructive, purely model theoretic characterization for classes definable by second order existential formulas.
Lakatos's methodology, if analysed as belonging to the demarcationist-rationalist program launched by Popper, yields some interesting conclusions concerning the feasibility of the project: (1) Rationalism cannot provide arguments against relativism. (2) A theory of scientific rationality cannot be defended without relying on scientific authorities. (3) A historical justification of scientific rationality does not show that the procedures that are rational according to the theory are truth-conducive.
The paper argues for the view, advocated by Yolton, that Locke's ideas are best viewed as intentional contents. Drawing on Smith and McIntyre's distinction between object- and content-theories of intentionality, I seek to show that Locke's account belongs to the second category. The argument relies mainly on the analysis of Locke's discussion of meaning, the reality and adequacy of ideas, and real essence.
A condition is formulated in terms of the probabilities of two pairs of correlated events in a classical probability space which is necessary for the two correlations to have a single (Reichenbachian) common cause, and it is shown that there exist pairs of correlated events whose probabilities violate the necessary condition. It is concluded that different correlations do not in general have a common common cause. It is also shown that this conclusion remains valid even if one slightly weakens Reichenbach's definition of common cause. The significance of the difference between common causes and common common causes is emphasized from the perspective of Reichenbach's Common Cause Principle.
A partition $\{C_i\}_{i\in I}$ of a Boolean algebra $\mathcal{S}$ in a probability measure space $(\mathcal{S},p)$ is called a Reichenbachian common cause system for the correlated pair $A,B$ of events in $\mathcal{S}$ if any two elements in the partition behave like a Reichenbachian common cause and its complement; the cardinality of the index set $I$ is called the size of the common cause system. It is shown that given any correlation in $(\mathcal{S},p)$, and given any finite size $n>2$, the probability space $(\mathcal{S},p)$ can be embedded into a larger probability space in such a manner that the larger space contains a Reichenbachian common cause system of size $n$ for the correlation. It is also shown that every totally ordered subset in the partially ordered set of all partitions of $\mathcal{S}$ contains only one Reichenbachian common cause system. Some open problems concerning Reichenbachian common cause systems are formulated.
In the paper it will be shown that Reichenbach’s Weak Common Cause Principle is not valid in algebraic quantum field theory with locally finite degrees of freedom in general. Namely, for any pair of projections A, B supported in spacelike separated double cones ${\mathcal{O}}_{a}$ and ${\mathcal{O}}_{b}$, respectively, a correlating state can be given for which there is no nontrivial common cause (system) located in the union of the backward light cones of ${\mathcal{O}}_{a}$ and ${\mathcal{O}}_{b}$ and commuting with both A and B. Since noncommuting common cause solutions are presented in these states, the abandonment of commutativity can modulate this result: noncommutative Common Cause Principles might survive in these models.
Bell inequalities, understood as constraints between classical conditional probabilities, can be derived from a set of assumptions representing a common causal explanation of classical correlations. A similar derivation, however, is not known for Bell inequalities in algebraic quantum field theories, which establish constraints for the expectation of specific linear combinations of projections in a quantum state. In the paper we address the question as to whether a ‘common causal justification’ of these non-classical Bell inequalities is possible. We will show that although the classical notion of common causal explanation can readily be generalized for the non-classical case, the Bell inequalities used in quantum theories cannot be derived from these non-classical common causes. Just the opposite is true: a non-classical common causal explanation can be given for a set of correlations even if they violate the Bell inequalities. This shows that the range of common causal explanations in the non-classical case is wider than that restricted by the Bell inequalities.
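For concreteness, the classical Bell inequalities referred to here are typically of the Clauser–Horne–Shimony–Holt form (a standard example, not a formula quoted from the paper):

$$-2 \le E(a, b) + E(a, b') + E(a', b) - E(a', b') \le 2,$$

where $E(x, y)$ is the expectation of the product of the $\pm 1$-valued outcomes of measurements $x$ and $y$; common causal derivations of such constraints assume screening-off common causes together with locality and no-conspiracy conditions.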
Standard derivations of the Bell inequalities assume a common common cause system, i.e. a common screener-off for all correlations, and some additional assumptions concerning locality and no-conspiracy. In a recent paper (Grasshoff et al., 2005), Bell inequalities were derived via separate common causes, assuming perfect correlations between the events. In the paper it will be shown that the assumptions of this separate-common-cause-type derivation of the Bell inequalities in the case of perfect correlations can be reduced to the assumptions of the common-common-cause-system-type derivation. However, in the case of non-perfect correlations a non-reducible separate-common-cause-type derivation of some Bell-like inequalities can be given. The violation of these Bell-like inequalities proves Szabó’s (2000) conjecture concerning the non-existence of a local, non-conspiratorial, separate-common-cause model for a δ-neighborhood of perfect EPR correlations.