The paradigm of Laplacean determinism combines three regulative principles: determinism, predictability, and the explanatory adequacy of universal laws together with purely local conditions. Historically, it applied to celestial mechanics, but it has been expanded into an ideal for scientific theories whose cogency is often not questioned. Laplace’s demon is an idealization of mechanistic scientific method. Its principles together imply reducibility, and rule out holism and emergence. I will argue that Laplacean determinism fails even in the realm of planetary dynamics, and that it does not give suitable criteria for explanatory success except within very well defined and rather exceptional domains. Ironically, the very successes of Laplacean method in the Solar System were made possible only by processes that are not themselves tractable to Laplacean methodology. The results of some of these processes were first observed in 1964, and violate the Laplacean requirements of locality and predictability, opening the door to holism and nonreducibility, i.e., emergence. Despite the falsification of Laplacean methodology, the explanatory resources of holism and emergence remain in scientific limbo, though emergence has been used somewhat indiscriminately in recent scientific literature. I make some remarks at the end about the proper use of emergence in its traditional sense going back to C.D. Broad.
This paper includes a concise survey of the work done in compliance with de Finetti's reconstruction of the Bayes-Laplace paradigm. Section 1 explains that paradigm and Section 2 deals with de Finetti's criticism. Section 3 quotes some recent results connected with de Finetti's program and Section 4 provides an illustrative example.
A famous Newtonian argument by Michell and Laplace, regarding the existence of “dark bodies” and dating back to the end of the 18th century, is able to provide an exact general-relativistic result, namely the exact formula for the Schwarzschild radius. Since general relativity was formulated more than a century after this argument had been issued, it looks quite surprising that such a correct prediction could have been possible. Far from being merely a fortuitous coincidence (as one might justifiably be induced to think), this fact can find a reasonable explanation once the question is approached the other way round, i.e. from the general-relativistic point of view. By reexamining Laplace’s proof from this point of view, we discuss here the reasons why the Michell-Laplace argument can be so “unexpectedly” correct in its general-relativistic prediction.
Between about 1790 and 1850 French mathematicians dominated not only mathematics, but also all other sciences. The belief that a particular physical phenomenon has to correspond to a single differential equation originates from the enormous influence Laplace and his contemporary compatriots had in all European learned circles. It will be shown that at the beginning of the nineteenth century Newton's “fluxionary calculus” finally gave way to a French-type notation of handling differential equations. A heated dispute in the Philosophical Magazine between Challis, Airy and Stokes, all three of them famous Cambridge professors of mathematics, then serves to illustrate the era of differential equations. A remark about Schrödinger and his equation for the hydrogen atom finally will lead back to present times.
The sensitive dependence on initial conditions (SDIC) associated with nonlinear models imposes limitations on the models’ predictive power. We draw attention to an additional limitation that has been under-appreciated, namely structural model error (SME). A model has SME if the model-dynamics differ from the dynamics in the target system. If a nonlinear model has only the slightest SME, then its ability to generate decision-relevant predictions is compromised. Given a perfect model, we can take the effects of SDIC into account by substituting probabilistic predictions for point predictions. This route is foreclosed in the case of SME.
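The structural-model-error point above can be illustrated with a toy computation (my own sketch, not drawn from the paper; the logistic map, the parameter values, and the horizon are invented for illustration): even with the initial condition known exactly, a model whose parameter is only slightly wrong soon loses track of the target dynamics.

```python
# Toy illustration of structural model error (SME) in a nonlinear system.
# The "target system" is a logistic map with r = 4.0; the "model" uses a
# slightly wrong r. With the exact initial condition, the model trajectory
# nevertheless decorrelates from the target within a few dozen steps.

def logistic(x, r):
    return r * x * (1.0 - x)

def trajectory(x0, r, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

x0 = 0.3
target = trajectory(x0, 4.0, 30)    # "true" dynamics
model = trajectory(x0, 3.999, 30)   # same initial condition, tiny SME

for t, (a, b) in enumerate(zip(target, model)):
    print(f"t={t:2d}  target={a:.6f}  model={b:.6f}  |error|={abs(a - b):.6f}")
```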
The paper deals with the problem of the estimation of an unknown probability from a finite number of experiments. We propose a normative (axiomatic) solution that restricts the class of admissible estimators to a one-parameter family. Moreover this solution coincides with the one obtained from Bayes theory with a β prior. Thus our results can be interpreted as a justification for the use of Bayesian inference with a β prior.
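For orientation, a standard statement of the Bayesian estimator alluded to above (textbook material, not a result specific to this paper): with $k$ successes observed in $n$ Bernoulli trials and a symmetric Beta$(\lambda,\lambda)$ prior, one plausible reading of a one-parameter family of estimators, the posterior mean of the unknown probability is

\[
\hat{p} \;=\; \frac{k+\lambda}{\,n+2\lambda\,},
\]

which for $\lambda = 1$ (the uniform prior) reduces to Laplace's rule of succession, $\hat{p} = (k+1)/(n+2)$.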
Imagine you live in 1823 and you are about to design an advanced course on the theory of heat. About fifty years ago, Lavoisier and Laplace had posited caloric as a material substance—an indestructible fluid of fine particles—which was taken to be the cause of heat and in particular, the cause of the rise of temperature of a body, by being absorbed by the body. No doubt, you rely on the best available theory, which is the caloric theory. In particular, meticulous and knowledgeable as you are, you rely on the best of the best: Laplace’s advanced account of the caloric theory of heat, with all its sophistication, detail and predictive might. You really believe that the best science teaching should be based on the best theories that are available. But you also believe that the best theory that is available is not really the best unless it has a claim to truth (or truthlikeness, or partial truth and the like). For what is the point of teaching a theory about the deep structure of the world unless it does say something or other about this deep structure? The course goes really well. Your notes are impressive. They are soon turned into a textbook with lots of explanatory detail and fancy calculations. Alas! The world does not co-operate. There are no calorific particles among the things there are in it. Heat is destroyed when work is produced. The advanced theory is challenged by alternative theories, anomalies and failed predictions. There is agony, but in your lifetime, the caloric theory gets superseded and is left discredited in the wasteland of false theories. Decades come by. You are not around anymore. Your grandchildren go to school and..
A Bayesian articulation of Hume's views is offered based on a form of the Bayes-Laplace theorem that is superficially like a formula of Condorcet's. Infinitesimal probabilities are employed for miracles against which there are 'proofs' that are not opposed by 'proofs'. Objections made by Richard Price are dealt with, and recent experiments conducted by Amos Tversky and Daniel Kahneman are considered in which persons tend to discount prior improbabilities when assessing reports of witnesses.
The law-governed world-picture -- A remarkable idea about the way the universe is cosmos and compulsion -- The laws as the cosmic order: the best-system approach -- The three ways: no-laws, non-governing-laws, governing-laws -- Work that laws do in science -- An important difference between the laws of nature and the cosmic order -- The picture in four theses -- The strategy of this book -- The meta-theoretic conception of laws -- The measurability approach to laws -- What comes where -- In defense of some received views -- Some assumptions that will be in play -- The laws are propositions -- The laws are true -- The logically contingent consequences of the laws are laws themselves -- At least some laws are metaphysically contingent -- The meta-theoretic conception of laws -- Laws of nature, laws of science, laws of theories -- The first-order conception versus the meta-theoretic conception -- What is a law of nature? -- Some examples of meta-theoretic accounts -- The virtues of the meta-theoretic conception -- Weighing the virtues and shortcomings of the meta-theoretic conception -- An epistemological argument for the meta-theoretic conception of laws -- The discoverability thesis, the governing thesis, and the first-order conception -- The main argument -- The objection from bad company -- The objection from inference to the best explanation -- The objection from bayesianism -- The objection from contextualist epistemology -- The objection from the threat of inductive skepticism -- Laws, governing, and counterfactuals -- Where we are now -- What would things have to be like in order for the laws of nature to govern the universe? -- Lawhood, inevitability, counterfactuals -- What is it for a proposition to be inevitably true? -- What is it for a whole class of propositions to be inevitably true? -- What is it for lawhood to confer inevitability? -- NP and supporting counterfactuals -- The worry about context-variability -- A solution and a look ahead -- When would the laws have been different? -- Where we are now -- The God cases -- Other counterexamples to NP -- A moral-theoretic counterexample to NP -- Scientific contexts and non-scientific contexts -- Scientific God cases? -- Lewisian non-backtracking counterexamples -- Where things stand now -- How could science show that the laws govern? -- Why the law-governed world-picture must include the science-says-so thesis -- What is extra-scientific? -- How can the science-says-so thesis be true? -- NP as a consequence of the presuppositions in any scientific context -- NP as true in all possible scientific contexts -- But how could it be so? -- Attack of the actual-factualists -- Measurement and counterfactuals -- Where we are now -- Measurements, reliability, counterfactuals -- A general principle that captures the relation between measurement and counterfactuals -- What we can learn about lawhood from what we have learned about the counterfactual commitments of science -- A first-order account of laws or a meta-theoretic account of laws? -- What methods are presupposed to be legitimate measurement procedures? -- Why we must adopt a meta-theoretic account of laws -- What lawhood is -- Where we are now -- The measurability account of laws -- Brief review of the case for the mal -- A note about hedged laws -- How plausible is the mal? -- What if we don't care about the law-governed world-picture?
-- Newton's God and Laplace's demon -- Beyond humean and non-humean -- Two views of laws -- Humean supervenience and the meta-theoretic conception -- Alleged counterexamples to humean supervenience -- Governing and non-trivial necessity -- How the mal lets us have it all -- Humeanism? non-humeanism? -- What is the significance of the idea of the law-governed universe? -- Where in the world are the laws of nature? -- Appendix: The mal in action: a few examples -- Of scientific theories and their laws -- Newton's theory as a paradigm example -- Classical special-force laws -- Geometrical optics and one of its laws -- Local deterministic field theories.
In 1956, when I was a callow sixteen-year-old sophomore early entrant to the University of Chicago, I read my first twentieth century philosophical book, A. J. Ayer's Language, Truth, and Logic. While I had already gorged on the Russian novelists, read through the then obligatory Hemingway and Faulkner, consumed Freud and a raft of popular sociologists, and managed to get myself expelled from my tenth grade social science class for issuing disparaging quotes from Marx and Schopenhauer, I was only then being introduced to classical philosophical and scientific texts through the marvelous and soon-to-be-by-stages-dismantled Robert Hutchins' three year great books curriculum, in which the Natural Sciences sequence began with Aristotle's Physics, Bk. II, continued with Galileo's Dialogue, selections from Newton's Principia, and on to papers by Laplace, Mach, Jeans and Einstein. Mathematics ABC was a simplified version of whole stretches of Principia Mathematica, the content of Russell's great work having become common collegial culture for logicians and mathematicians. I soon read some of the less technical works of Russell, whom Ayer cast as Hamlet to his own humble Horatio, and of David Hume, whose skeptical contentions Ayer claimed merely to update and cast into a linguistic vein. With the further help of Hume and Russell, I emended Rene Descartes's insufficiently skeptical “I think, therefore I am” to the minimalist “There are experiences”. I wryly chuckled in agreement with Russell's saucy contention that the only materialists in the world were Russian commissars and American behavioral scientists. Common sense realism about physical objects leads to science, which inevitably refutes naïve realism. Disaster and apostasy loomed in my first concerted encounter, at the graduate course level, with 20th century Anglo-American philosophy..
What thesis is Hume trying to establish in his essay “On Miracles” (Section 10 of the Enquiry Concerning Human Understanding) and does he succeed? John Earman’s answer to the latter question is clearly conveyed by the title of his new book. Earman uses a Bayesian representation of the problem to make his case. For Earman, this mode of analysis is both perspicuous and nonanachronistic, in that probability reasoning was central to the 18th century debate about miracles in particular and testimony in general. Indeed, one of Hume’s most interesting antagonists, Richard Price, was the person to whom Thomas Bayes entrusted his now-famous essay for posthumous publication. For Earman, Price is the proper Bayesian, while Hume’s essay provides only “rhetoric and schein geld” (p. 73). Earman’s tone is consistently prosecutorial and sometimes snide; he says that his animus is not so much against Hume himself as against those who smugly invoke Hume’s essay as definitively settling the matter. This tone should not deter potential readers who are convinced that Hume’s essay contains something of value. Earman’s book is interesting and provocative in multiple ways—it places Hume’s essay in its historical setting, it offers an insightful close reading of the text, and it shows how the resources of Bayesianism can be powerfully put to work. Besides Earman’s own essay (94 pages long), the volume also contains Hume’s essay and relevant work by others, including Locke, Spinoza, Samuel Clarke, Price, Laplace, and Babbage. The book would be an excellent choice for an advanced undergraduate or graduate seminar.
Gott (1993) has used the ‘Copernican principle’ to derive a probability distribution for the total longevity of any phenomenon, based solely on the phenomenon’s past longevity. Leslie (1996) and others have used an apparently similar probabilistic argument, the ‘Doomsday Argument’, to claim that conventional predictions of longevity must be adjusted, based on Bayes’s Theorem, in favor of shorter longevities. Here I show that Gott’s arguments are flawed and contradictory, but that one of his conclusions is plausible and mathematically equivalent to Laplace’s famous—and notorious—‘rule of succession’. On the other hand, the Doomsday Argument, though it appears consistent with some common-sense grains of truth, is fallacious; the argument’s key error is to conflate future longevity and total longevity. Applying the work of Hill (1968) and Coolen (1998, 2006) in the field of nonparametric predictive inference, I propose an alternative argument for quantifying how past longevity of a phenomenon does provide evidence for future longevity. In so doing, I identify an objective standard by which to choose among counting time intervals, counting population, or counting any other measure of past longevity in predicting future longevity.
The physiologist and neo-Kantian philosopher Johannes von Kries (1853-1928) wrote one of the most philosophically important works on the foundation of probability after P.S. Laplace and before the First World War, his Principien der Wahrscheinlichkeitsrechnung (1886, repr. 1927). In this book, von Kries developed a highly original interpretation of probability, which maintains it to be both logical and objectively physical. After presenting his approach I shall pursue the influence it had on Ludwig Wittgenstein and Friedrich Waismann. It seems that von Kries's approach had more potential than recognized in his time and that putting Waismann's and Wittgenstein's early work in a von Kries perspective is able to shed light on the notion of an elementary proposition.
A major difficulty for currently existing theories of inductive inference involves the question of what to do when novel, unknown, or previously unsuspected phenomena occur. In this paper one particular instance of this difficulty is considered, the so-called sampling of species problem. The classical probabilistic theories of inductive inference due to Laplace, Johnson, de Finetti, and Carnap adopt a model of simple enumerative induction in which there are a prespecified number of types or species which may be observed. But, realistically, this is often not the case. In 1838 the English mathematician Augustus De Morgan proposed a modification of the Laplacian model to accommodate situations where the possible types or species to be observed are not assumed to be known in advance; but he did not advance a justification for his solution.
Theoretical determinism, as it is usually ascribed to Laplace, is neither verifiable nor falsifiable and has therefore no real content. It is not the same as predictability of actually observable phenomena. On the other hand, predictability is not an abstract principle; rather it is true to a certain degree, depending on the phenomena considered. It can be discussed only by examining the scientific state of affairs. This is done in some detail for classical statistical mechanics. Much of a recently published debate on determinism (Amsterdamski et al. 1990) is thereby obviated.
This paper argues that probability is not an objective phenomenon that can be identified with either the configurational properties of sequences, or the dynamic properties of sources that generate sequences. Instead, it is proposed that probability is a function of subjective as well as objective conditions. This is explained by formulating a notion of probability that is a modification of Laplace's classical enunciation. This definition is then used to explain why probability is strongly associated with disordered sequences, and is also used to throw light on a number of problems in probability theory.
The objectivity of physics has been called into question by social theorists, Kuhnian relativists, and by anomalous aspects of quantum mechanics. Here we focus on one neglected background issue, the categorical structure of the language of classical physics. The first half is an historical overview of the formation of the language of classical physics (LCP), beginning with Aristotle's Categories and the novel idea of the quantity of a quality introduced by medieval Aristotelians. Descartes and Newton attempted to put the new mechanics on an ontological foundation of atomism. Euler was the pivotal figure in basing mechanics on a macroscopic concept of matter. The second scientific revolution, led by Laplace, took mechanics as foundational and attempted to fit the Baconian sciences into a framework of atomistic mechanism. This protracted effort had the unintended effect of supplying an informal unification of physics in a mixture of ordinary language and mechanistic terms. The second half treats LCP as a linguistic parasite that can attach itself to any language and effect mutations in the host without changing its essential form. This puts LCP in the context of a language of discourse and suggests that philosophers should concentrate more on the dialog between experimenters and theoreticians and less on analyses of theories. This orientation supplies a basis for treating objectivity.
J. Richard Gott III (1993) has used the “Copernican principle” to derive a probability density function for the total longevity of any phenomenon, based solely on the phenomenon’s past longevity. John Leslie (1996) and others have used an apparently similar probabilistic argument, the “Doomsday Argument,” to claim that conventional predictions of longevity must be adjusted, based on Bayes’ Theorem, in favor of shorter longevities. Here I show that Gott’s arguments are flawed and contradictory, but that one of his conclusions—his delta t formula—is mathematically equivalent to Laplace’s famous (and notorious) ‘rule of succession’; moreover, Gott’s delta t formula is a plausible worst-case (if one favors greater longevity) bound in some contexts. On the other hand, the Doomsday Argument is fallacious: the argument’s Bayesian formalism is stated in terms of total duration, but all attempted real-life applications of the argument—with one exception, an application by Gott 1994—actually plug in prior probabilities for future duration; moreover, the Self-Sampling Assumption, an essential premise of the Doomsday Argument, is contradicted by the prior information in all known real-life cases. But rejecting the Doomsday Argument does not entail rejecting the possibility of learning about the future from the past. Applying the work of Bruce M. Hill (1968, 1988, 1993) and Frank P.A. Coolen (1998, 2006) in the field of non-parametric predictive inference, I propose and defend an alternative methodology for quantifying how past longevity of any phenomenon does provide evidence for future longevity. In so doing, I identify an objective standard by which to choose among counting time intervals, counting population, or counting any other measure of past longevity in predicting future longevity. This methodology forms the basis of a calculus of induction.
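For reference, the two formulas being compared are standard and can be stated briefly (my gloss, not a quotation from the paper): Laplace's rule of succession gives

\[
P(\text{success on trial } n+1 \mid s \text{ successes in } n \text{ trials}) \;=\; \frac{s+1}{\,n+2\,},
\]

while Gott's delta t argument asserts, with 95% confidence, that a phenomenon of past longevity $t_{\text{past}}$ has future longevity $t_{\text{future}}$ satisfying $t_{\text{past}}/39 \le t_{\text{future}} \le 39\,t_{\text{past}}$.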
This article focuses on the role of statistical concepts in both experiment and theory in various scientific disciplines, especially physics, including astronomy, and psychology. In Sect. 1 the concept of uncertainty in astronomy is analyzed from Ptolemy to Laplace and Gauss. In Sect. 2 theoretical uses of probability and statistics in science are surveyed. Attention is focused on the historically important example of radioactive decay. In Sect. 3 the use of statistics in biology and the social sciences is examined, with detailed consideration of various Chi-square statistical tests. Such tests are essential for proper evaluation of many different kinds of scientific hypotheses.
A loose analogy relates the work of Laplace and Hilbert. These thinkers had roughly similar objectives. At a time when so much of our analytic effort goes to distinguishing mathematics and logic from physical theory, such an analogy can still be instructive, even though differences will always divide endeavors such as those of Laplace and Hilbert.
In October 2009 I decided to stop doing philosophy. This meant, in particular, stopping work on the book that I was writing on the nature of probability. At that time, I had no intention of making my unfinished draft available to others. However, I recently noticed how many people are reading the lecture notes and articles on my web site. Since this draft book contains some important improvements on those materials, I decided to make it available to anyone who wants to read it. That is what you have in front of you. The account of Laplace's theory of probability in Chapter 4 is very different to what I said in my seminar lectures, and also very different to any other account I have seen; it is based on a reading of important texts by Laplace that appear not to have been read by other commentators. The discussion of von Mises' theory in Chapter 7 is also new, though perhaps less revolutionary. And the final chapter is a new attempt to come to grips with the popular, but amorphous, subjective theory of probability. The material in the other chapters has mostly appeared in previous articles of mine but things are sometimes expressed differently here. I would like to say again that this is an incomplete draft of a book, not the book I would have written if I had decided to finish it. It no doubt contains poor expressions, it may contain some errors or inconsistencies, and it doesn't cover all the theories that I originally intended to discuss. Apart from this preface, I have done no work on the book since October 2009.
The probabilistic corroboration of two or more hypotheses or series of observations may be performed additively or multiplicatively. For additive corroboration (e.g. by Laplace's rule of succession), stochastic independence is needed. Inferences, based on overwhelming numbers of observations without unexplained counterinstances, permit hyperinduction, whereby extremely high probabilities, bordering on certainty for all practical purposes, may be achieved. For multiplicative corroboration, the error probabilities (1 - Pr) of two (or more) hypotheses are multiplied. The probabilities, obtained by reconverting the product, are valid for both of the hypotheses and indicate the gain by corroboration. This method is mathematically correct, no probabilities > 1 can result (as in some conventional methods), and high probabilities with fewer observations may be obtained; however, semantical independence is a prerequisite. The combined method consists of (1) the additive computation of the error probabilities (1 - Pr) of two or more single hypotheses, whereby arbitrariness is avoided or at least reduced, and (2) the multiplicative procedure. The high reliability of Empirical Counterfactual Statements is explained by the possibility of multiplicative corroboration of "all-no" statements due to their strict semantical independence.
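A small worked example of the multiplicative step described above (the numbers are invented purely for illustration): with single-hypothesis probabilities $\Pr_1 = 0.9$ and $\Pr_2 = 0.95$,

\[
(1-\Pr_1)(1-\Pr_2) \;=\; 0.1 \times 0.05 \;=\; 0.005, \qquad \Pr_{\text{combined}} \;=\; 1 - 0.005 \;=\; 0.995,
\]

so the reconverted probability exceeds either input yet can never exceed 1.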
Machine generated contents note: 1. Transporter Troubles. -- 2. Zeno's Hand to Mouth Paradox. -- 3. If a Pint Spills in the Forest. -- 4. The Beer Goggles Paradox. -- 5. Pascal's Wager. -- 6. The Experience Machine. -- 7. Lucretius' Spear. -- 8. The Omnipotence Dilemma. -- 9. What Mary Didn't Know About Lager. -- 10. Malcolm X and the Whites Only Bar. -- 11. Untangling Taste. -- 12. The Foreknowledge Paradox. -- 13. The Buddha's Missing Self. -- 14. The Blind Men and the Black and Tan. -- 15. Liar's Paradox. -- 16. Paley's Cask. -- 17. Chuang Tzu's Butterfly. -- 18. Descartes' Doubt. -- 19. God's Command. -- 20. Mill's Drunkard. -- 21. The Myth of Gyges. -- 22. Laplace's Superscientist. -- 23. Gaunilo's Perfect Ale. -- 24. The Problem of Moral Truth. -- 25. How to Sew on a Soul. -- 26. Plato's Forms. -- 27. Realizing Nirvana. -- 28. The Problem of Evil. -- 29. Time's Conundrum. -- 30. Time Travel Paradoxes. -- 31. Hitler's Lager. -- 32. The Zen Koan. -- 33. Sex and Sensibility. -- 34. Socrates' Virtue. -- 35. Nature Calls. -- 36. Nietzsche's Eternal Recurrence. -- 37. The Most Interesting Man and the Firing Line. -- 38. Turing's Tasting Machine. -- 39. Singer's Pond. -- 40. The Wisest One of All. -- 41. Enter the Matrix. -- 42. A Case of Bad Faith. -- 43. Cask and Cleaver. -- 44. Flirting with Disaster. -- 45. Fear of Zombies. -- 46. Lao Tzu's Empty Mug. -- 47. Beer and the Meaning of Life. -- 48. The Case for Temperance.
My aim in this paper is to give a philosophical analysis of the relationship between contingently available technology and the knowledge that it makes possible. My concern is with what specific subjects can know in practice, given their particular conditions, especially available technology, rather than what can be known “in principle” by a hypothetical entity like Laplace’s Demon. The argument has two parts. In the first, I’ll construct a novel account of epistemic possibility that incorporates two pragmatic conditions: responsibility and practicability. For example, whether subjects can gain knowledge depends in some circumstances on whether they have the capability of gathering relevant evidence. In turn, the possibility of undertaking such investigative activities depends in part on factors like ethical constraints, economical realities, and available technology. In the second part of the paper, I’ll introduce “technological possibility” to analyze the set of actions made possible by available technology. To help motivate the problem and later test my proposal, I’ll focus on a specific historical case, one of the earliest uses of digital electronic computers in a scientific investigation. I conclude that the epistemic possibility of gaining access to scientific knowledge about certain subjects depends (in some cases) on the technological possibility for making responsible investigations.
This paper argues that, contrary to most interpretations, e.g., those of Reid, Popkin and Passmore, Hume is not a sceptic with regard to reason. The argument of Treatise I, IV. i, of course, has a sceptical conclusion with regard to reason, and a somewhat similar point is made by Cleanthes in the Dialogues. This paper argues that the argument of Treatise I, IV. i is parallel to similar arguments in Bentham and Laplace. The latter are, as far as they go, sound, and so is Hume’s. But the limitations of all mean that they cannot sustain a general argument against reason. Hume the historian is quite aware of these limitations. So is Hume the philosopher. A careful examination of the other references in the Treatise to the argument of I, IV. i reveals that Hume not only rejects but constructs a sound case against accepting the sceptical conclusion, arguing that reason can indeed show the sceptic’s argument to be unreasonable. A close reading of the Dialogues shows that Hume there also draws the same conclusion. The thrust of the paper is to go some way towards showing that it is a myth that Hume is a Pyrrhonian sceptic.
Support functions $s(h,e)=p(h\mid e)-p(h)$ are widely used in discussion of explanation, causality and, recently, in connection with the possibility or otherwise of probabilistic induction. With this latter application in view, a rather complete analysis of the variety of support functions, their interrelationships and their "non-deductive" and "inductive" components is presented. With the restriction to two propositions, three variable probabilities are enough to discuss such problems. The analysis is illustrated by graphs, a Venn diagram and by using the Laplace Rule of Succession as an illustrative example. It is concluded that within this framework one cannot prove or disprove the possibility of probabilistic induction.
In this unique monograph, based on years of extensive work, Chatterjee presents the historical evolution of statistical thought from the perspective of various approaches to statistical induction. Developments in statistical concepts and theories are discussed alongside philosophical ideas on the ways we learn from experience. Suitable for researchers, lecturers and students in statistics and the history of science, this book is aimed at those who have had some exposure to statistical theory. It is also useful to logicians and philosophers due to the discussion of the problem of statistical induction in a wider philosophical context and the impact of developments in statistics on current thinking. The book is divided into two parts: Part I (Chapters 1-4), entitled 'Perspective', deals with foundations and structure, and Part II (Chapters 5-10) explores the 'History'. In Chapter 1 statistics is characterized as 'prolongation of induction', and its philosophical background is briefly reviewed. The special features of statistical induction, the two roles (as input and output) the theory of probability plays in it, and the different interpretations of probability are discussed in the next two chapters. Chapter 4 distinguishes broadly between four different approaches to statistical induction (behavioural, instantial, pro-subjective Bayesian, and purely subjective) that have been developed by taking different interpretations of probability as input and output, and considers their comparative characteristics, advantages and disadvantages. Part II traces the historical evolution of statistical thought in the perspective of the framework described in Part I and specifically considers the origin and development of the different concepts of probability and their application to the formulation of the different approaches to statistical induction. After some reference to the prehistory of the subject, the contributions made by the principal contributors in probability and statistics in the 17th-20th centuries are outlined (beginning with Cardano, Pascal, Fermat, Huygens and James Bernoulli and proceeding through Laplace and Gauss to Karl Pearson, Fisher, Neyman, E. S. Pearson, Wald, and their successors). Throughout, the emphasis is on concepts - factual details and technicalities are introduced only if they are unavoidable.
The elliptical orbits resulting from Newtonian gravitation are generated with a multifaceted symmetry, mainly resulting from their conservation of both angular momentum and a vector fixing their orientation in space—the Laplace or Runge-Lenz vector. From the ancient formalisms of celestial mechanics, I show a rather counterintuitive behavior of the classical hydrogen atom, whose orbits respond in a direction perpendicular to a weak externally-applied electric field. I then show how the same results can be obtained more easily and directly from the intrinsic symmetry of the Kepler problem. If the atom is subjected to an oscillating electric field, it enjoys symmetry in the time domain as well, which is manifest by quasi-energy states defined only modulo ħω. Using the Runge-Lenz vector in place of the radius vector leads to an exactly-solvable model Hamiltonian for an atom in an oscillating electric field—embodying one of the few meaningful exact solutions in quantum mechanics, and a member of an even more exclusive set of exact solutions having a time-dependent Hamiltonian. I further show that, as long as the atom suffers no change in principal quantum number, incident radiation will produce harmonic radiation with polarization perpendicular to the incident radiation. This unusual polarization results from the perpendicular response of the wavefunction, and is distinguished from most usual harmonic radiation resulting from a scalar nonlinear susceptibility. Finally, I speculate on how this radiation might be observed.
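For context, the conserved vector the abstract refers to is standard Kepler-problem material: for a particle of mass $m$, momentum $\mathbf{p}$ and angular momentum $\mathbf{L}$ in the attractive potential $-k/r$, the Laplace-Runge-Lenz vector

\[
\mathbf{A} \;=\; \mathbf{p} \times \mathbf{L} \;-\; m k\,\hat{\mathbf{r}}
\]

is a constant of the motion; it points along the major axis toward perihelion and thereby fixes the orientation of the ellipse in space.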
This contribution addresses major distinctions between the notions of determinism, causation, and prediction, as they are typically used in the sciences. Formally, this can be elegantly achieved by two ingredients: (i) the distinction of ontic and epistemic states of a system, and (ii) temporal symmetry breakings based on the mathematical concept of the affine time group. Key aspects of the theory of deterministically chaotic systems together with historical quotations from Laplace, Maxwell, and Poincaré provide significant illustrations. An important point of various discussions in consciousness studies (notably about 'mental causation' and 'free agency'), the alleged 'causal closure of the physical', will be analysed on the basis of the affine time group and the breakdown of its symmetries.
The Wheeler-Feynman (WF) relativistic theory of interacting point particles, generalized by acceptance of an arbitrary spacelike interaction, is shown to possess a privileged status, reminiscent of the “central force” interactions occurring in Newtonian mechanics. This scheme is shown to be isomorphic to the classical one of the statics of interacting flexible current-carrying wires obeying the Ampère-Laplace (AL) formulas: to the tension T (T^2 = const) of the wire corresponds the momentum-energy p_i (p_i p_i = −c^2 m^2) of the particle; to the Laplace linear force density −i H × dr corresponds the Lorentz force Q H_{ij} dr^j; to the Laplace potential i r^{−1} dr corresponds the WF potential Q δ(r^2) dr_i, etc. Among the differences, there is self-action in the AL scheme and no self-action in the WF scheme. A stationary energy principle in the AL scheme is isomorphic to Fokker's stationary action principle in the WF scheme.
It is suggested that the world is locally projectively flat rather than Euclidean. From this postulate it is shown that an (N+1)-particle system has the global geometry of the symmetric space SO(4,N+1)/SO(4)×SO(N+1). A complex representation also exists, with structure SU(2,N+1)/S[U(2)×U(N+1)]. Several aspects of these geometries are developed. Physical states are taken to be eigenfunctions of the Laplace-Beltrami operators. The theory may provide a rational basis for comprehending the groups SO(4,2), SU(2)×U(1), SU(3), etc., of current interest.
Haüy's role as the 'great legislator of mineralogy', to borrow Cuvier's judgment, has overshadowed his activity in the field of electricity. It is true that the phenomenon Haüy discovered, pressure electricity, lost its interest for physicists at the end of the nineteenth century. What interests us here is the analysis of the evolution of Haüy's attitudes toward electricity, because it allows a better understanding of the power, and the ambiguity, of the collective movement that transformed French physics at the end of the eighteenth century, a movement in which he was a major actor. In the 1770s, this cleric behaved like all amateurs of electricity, publicly demonstrating its spectacular properties. When he devoted himself to founding crystallography on geometry, he did not forget the amateurs of minerals; he offered them a simplified classification of crystals according to their electrical properties, properties to be established with qualitative electrical instruments. But when Coulomb announced his fundamental law of electricity, supported by a highly technical and very sensitive experiment, Haüy converted to this new science of electricity. Although he was not a mathematician and was a stranger to this type of experiment (herein lies the ambiguity), Haüy became an ardent promoter of this radically new electricity, which pushed amateurs out of the field of science. Having become a recognized member of the Parisian physico-mathematical community, Haüy integrated fully into the approach advanced by his colleagues at the Académie, Lavoisier, Laplace and Coulomb, and definitively left the realm of amateurs of crystals and sparks.
Being formalized inside the S-matrix scheme, the zigzagging causality model of EPR correlations has full Lorentz and CPT invariance. EPR correlations, proper or reversed, and Wheeler's smoky dragon metaphor are respectively pictured in spacetime or in the momentum-energy space, as V-shaped, A-shaped, or C-shaped ABC zigzags, with a summation at B over virtual states |B〉 〈B|. An exact “correspondence” exists between the Born-Jordan-Dirac “wavelike” algebra of transition amplitudes and the 1774 Laplace algebra of conditional probabilities, where the intermediate summations |B) (B| were over “real hidden states.” While the latter used conditional (or transition) probabilities (A|C) = (C|A), the former uses transition (or conditional) amplitudes 〈A|C〉 = 〈C|A〉*. The formal parallelism breaks down at the level of interpretation because (A|C) = |〈A|C〉|^2. CPT invariance implies the Fock and Watanabe principle that, in quantum mechanics, retarded (advanced) waves are used for prediction (retrodiction), an expression of which is 〈Ψ| U |Φ〉 = 〈Ψ| UΦ〉 = 〈U⁻¹Ψ| Φ〉, with |Φ〉 denoting a preparation, |Ψ〉 a measurement, and U the evolution operator. The transformation |Ψ〉 = |UΦ〉 or |Φ〉 = |U⁻¹Ψ〉 exchanges the “preparation representation” and the “measurement representation” of a system and is ancillary in the formalization of the quantum chance game by the “wavelike algebra” of conditional amplitude. In 1935 EPR overlooked that a conditional amplitude 〈A|C〉 = Σ 〈A|B〉〈B|C〉 between the two distant measurements is at stake, and that only measurements actually performed do make sense. The reversibility 〈A|C〉 = 〈C|A〉* implies that causality is CPT-invariant, or arrowless, at the microlevel. Arrowed causality is a macroscopic emergence, corollary to wave retardation and probability increase. Factlike irreversibility states repression, not suppression, of “blind statistical retrodiction”—that is, of “final cause.”
The generalized self-consistent method is developed to deal with porous materials at high temperature, accounting for thermal radiation. An exact closed form formula of the local effective thermal conductivity is obtained by solving Laplace's equation, and a good approximate formula with uncoupled conductive and radiative effects is given. A comparison with available experimental data and theoretical predictions demonstrates the accuracy and efficiency of the present formula. Numerical examples provide a better understanding of interesting interaction phenomena of pores in heat transfer. It is found that the local effective thermal conductivity divides into two parts. One, attributed to conduction, is independent of pore radius for a fixed porosity and, furthermore, is independent of temperature (actually, it is approximately independent of the temperature) if it is non-dimensionalized by the thermal conductivity of the matrix. The other is due to thermal radiation in pores and strongly depends on the temperature and pore radius. The radiation effect cannot be neglected at high temperature and in the case of relatively large pores.
Why was Hensen unsuccessful in the quantification of ecological sampling? No aspect of plankton research itself seems to have hindered quantification; both collecting methods and taxonomy were sufficiently advanced. The reason is probably that at the time he began sampling, Hensen had to devise his own statistical methods for expressing the reproducibility and validity of samples. Hensen might have succeeded in this if he had overcome prevalent nineteenth-century attitudes toward randomness. The statistical literature of medicine and physics with which Hensen was probably familiar gave methods for expressing reproducibility and for comparing differences between means of different sets of observations. For example, a student of Poisson writing on medical statistics advocated using Poisson's limit (standard error 2√2) to test the difference between two means.56 Other authors suggested that differences between means were most meaningful if very large numbers of observations were used.57 In his laboratory subsampling, Hensen used the probable error as a limit about means. In this and other ways, he seems most indebted to the physicist Ernst Abbe for statistical methods.58 However, all the methodology available to Hensen had been developed for situations in which errors are a property of the measurement or sampling process, and not of the phenomena themselves. The available methods for measuring reproducibility were based on the assumption that differences from the average were small and that they tended to accumulate about the mean in a bell-shaped pattern. Hensen constantly reinvestigated the distribution of plankton numbers about the average using a different method each time. Westergaard points out that medical statisticians did not make such investigations with their biological data.59 To a considerable extent, biological sampling problems forced development of theory because samples afforded the only information on a pattern in water or soil which could not be directly observed. The sampling methods of Laplace and the late nineteenth-century government statisticians contrasted strongly with Hensen's because, either through subjective knowledge of the population sampled or through censuses, they attempted to choose representative or typical samples.60 The high reproducibility and validity of representative sampling is attained by knowing more about a population than a biologist can ordinarily know. The uncertain reproducibility and validity of biological sampling spurred the development of formal sampling theory. A formal sampling theory developed only after change in the general intellectual attitude toward randomness, which was reflected in nineteenth-century statistics.61 The nineteenth-century attitude that randomness is not part of nature changed in the twentieth century to a view of randomness as a property of nature.62 The physicist's incorporation of randomness into physical models in response to this intellectual change late in the nineteenth century is discussed by Bork.63 In biology, the change was initiated by the attention Darwin focused on morphological variation. The English biometricians — Francis Galton and W. F. R. Weldon, for example — were prominent in developing methods for the analysis of biological variation.64 Most pertinent to the development of sampling theory was Karl Pearson's use of frequency distributions as models of biological variation.
In ecology, quantification was brought about by Ronald Fisher more than by anyone else; he incorporated randomness into sampling plans and built upon the methods developed earlier for analysis of individual variation. Fisher's use of random sampling allowed comparison between the sample collections and the collections expected from a model population of known patterning (calculated with a frequency distribution). This is a much more efficient method of determining the validity of a sample than Hensen's comparison of collections with a model uniform collection. Intellectual background and accumulated biological information caused Fisher to find variability where Hensen had seen uniformity. In summary, Victor Hensen became interested in fisheries research because of the economic importance of fishing to Germany. Hensen had considerable understanding of the prerequisites for valid sampling, but the value of his quantitative approach was limited by the general preconceptions shared by most nineteenth-century biologists. Through Hensen's efforts many other biologists were stimulated to undertake quantitative samples, even though the statistical methods for analyzing variation among populations developed only after methods for analyzing variation among individuals had been developed.
Laplace-transform and Z-transform theories have been applied to analyze the tensile stress–strain curves of a co-woven-knitted (CWK) composite under quasi-static (0.001/s) and high strain rates (up to 2586/s) tension. The transform results were extended to characterize the tension failure and dynamic responses of the CWK composite in the frequency domain. Specifically, the Laplace-transform theory was employed to analyze the stress–strain curves of the CWK composite along 0°, 45° and 90° directions when the composite is assumed to be a continuous system, while the Z-transform theory was used for the discrete system for the composite. From the transformed results, it was found that the stress–strain curves of the CWK composite specimen under different strain rates tension have similar stability behaviours for the Laplace- and Z-transform. For the continuous system, few pole plots are distributed on the left side of the imaginary axis, which means the system is unstable. Nevertheless, the pole-plot distribution is stable before the post-critical deformation of the CWK composite. For the discrete system, most of the poles are located inside the unit circle before post-critical deformation, indicating the system is stable. From the stiffness–time history and fracture morphology, the stability of the pole-plot distribution corresponds to the stiffness stability and fracture uniformity. From continuous and discrete system analyses, it is found that the stress–time and strain–time histories of the CWK composite can be regarded as a digital signal system. Digital signal processing (DSP) methods can be extended to the investigation of the mechanical behaviour of composites.
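As a generic illustration of the Z-domain stability criterion invoked above (not the authors' data or code; the denominator coefficients below are arbitrary placeholders), stability of a discrete system is read off from whether the poles of its transfer function lie inside the unit circle:

```python
# Generic sketch of the Z-domain pole/unit-circle stability check.
# The coefficients are hypothetical, not data from the paper.
import numpy as np

# H(z) = B(z)/A(z); the poles are the roots of the denominator A(z).
# A(z) = 1 - 1.2 z^-1 + 0.5 z^-2, i.e. z^2 - 1.2 z + 0.5 after clearing powers of z.
denominator = [1.0, -1.2, 0.5]
poles = np.roots(denominator)

stable = np.all(np.abs(poles) < 1.0)   # all poles strictly inside the unit circle
print("poles:", poles)
print("stable discrete system:", bool(stable))
```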
From the elimination of final causes by the mathematization of physics to the Kant-Laplace hypothesis on the formation of the solar system, and then to the development of the life sciences, which defined a new field of rationality, the theme of creation in the eighteenth century is particularly apt to show the evolution of the relations between science, philosophy and metaphysics, up to its gradual disappearance in philosophies as different as those of Diderot, Hume and Kant.
As a counterpoint to his mathematical and physical work, and in relation to it, d'Alembert developed a theory of knowledge influenced by Locke and Condillac's sensualism, but centered above all on an epistemology of Newtonian physics. A realist who advocated recourse to experience, he was at the same time deeply rationalist, indeed, though he would rather have denied it, precisely in the lineage of Descartes. But although Reason was his fundamental reference, to the point that he wished to ground the whole of physico-mathematical science (that is, above all, Mechanics) on its most evident principles, his program cannot be called Cartesian: not only did he reject innate ideas, he also accepted the critique of an apparent rationality required by the consideration of irreducible facts (attraction, for example). His epistemology is a rational realism referred to the very being of Nature (Reason and Nature ultimately meet). It is on the basis of these conceptions that he accepted or rejected certain physical notions that were either ambiguous or uncertain. His rejection of the concept of force, like his rejection of any consideration of the intimate texture of bodies, might seem to make him, through his refusal of whatever is not directly measurable, a herald of the positivism of Laplace and Comte: such an interpretation would be inaccurate, and d'Alembert held that thought can attain knowledge of the real. If his conceptions were elaborated in counterpoint to his scientific activity, they were also developed in struggle and polemic against metaphysics, within his philosophical engagement (he was, with Voltaire, the leader of the 'philosophical party'), notably in the Encyclopédie. Although he referred, like Newton, to a supreme Intelligence at work in the Universe, he was not a deist and soon adopted a skeptical position. His search for, and affirmation of, the autonomy of the laws of Nature are a-theistic in the privative sense that announces Laplace. He opposed the materialism of d'Holbach and Helvétius, yet he gradually drew closer (notably around 1765, no doubt under the influence of Diderot, who wrote Le rêve de d'Alembert around that time and with whom he had just reconciled) to a dynamic materialism; but he professed this materialism only in private (essentially in his correspondence with Frederick of Prussia). The soul is material, and if 'the simplest reasoning proves that there is an eternal being', this God is material; he 'is nothing but matter insofar as it is intelligent', which coincides with the definition of materialism given by Diderot in the Encyclopédie. However, refusing to pronounce on the in-itself of things and on the nature of matter itself, as well as to define himself metaphysically, his late materialism is still marked by skepticism.
Naive mereology studies ordinary conceptions of part and whole. Parts, unlike portions, have objective boundaries, and many things, such as dances and sermons, have temporal parts. In order to deal with Mark Heller's claim that temporal parts "are ontologically no more or less basic than the wholes that they compose," we retell the story of Laplace's Genius, here named "Swifty." Although Swifty processes lots of information very quickly, his conceptual repertoire need not extend beyond fundamental physics. So we attempt to follow Swifty's progress in the acquisition of ordinary concepts such as 'table'. (Puzzles of precision and intrusion appear along the way.) Swifty has to understand what tables are before understanding what temporal portions of tables are. This is one reason for regarding tables as ontologically prior to table portions.
This paper examines a criterion for decision making under uncertainty, first introduced by Starr, which differs from the classical criteria. The rationale and properties of this criterion, called the Domain criterion, are discussed and compared with the traditional approaches of Wald, Hurwicz, Savage and Laplace. The computational complexity and usefulness of the criterion are discussed, as well as the underlying decision philosophy.
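A minimal sketch of the four classical criteria the Domain criterion is compared against, applied to a hypothetical payoff matrix (Starr's Domain criterion itself is not reproduced here, and all numbers are invented):

```python
# Hedged sketch: classical decision-under-uncertainty criteria on a made-up
# payoff matrix (rows = actions, columns = states of nature). Larger is better.
import numpy as np

payoffs = np.array([[10.0, 2.0, 6.0],
                    [ 7.0, 7.0, 7.0],
                    [12.0, 0.0, 4.0]])

wald = payoffs.min(axis=1).argmax()            # Wald: maximin (best worst case)

alpha = 0.6                                    # Hurwicz optimism index (assumed value)
hurwicz = (alpha * payoffs.max(axis=1)
           + (1 - alpha) * payoffs.min(axis=1)).argmax()

regret = payoffs.max(axis=0) - payoffs         # Savage: minimax regret
savage = regret.max(axis=1).argmin()

laplace = payoffs.mean(axis=1).argmax()        # Laplace: equal probabilities over states

print("Wald:", wald, "Hurwicz:", hurwicz, "Savage:", savage, "Laplace:", laplace)
```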
One of the most evident aims of the physics of the last century was, apparently, to reduce all phenomena to a mechanical explanation. The classic example is the "reductionism" of Laplace and Helmholtz. At the same time, it should be noted that there was also an entire sector of mathematical physics (the one whose founder was Fourier) that took a substantially "non-reductionist" position, marked by the firm exclusion of hypotheses concerning unobservable entities.
A new viscoelastic creep function that incorporates both the effects of elastically-accommodated grain boundary sliding (GBS) and transient diffusion creep is proposed. It is demonstrated that this model can simultaneously describe both the transient microcreep curves and the shear attenuation/modulus dispersion in a fine-grained (d ≈ 5 µm) peridotite (olivine + 39 vol. % orthopyroxene) specimen. Low-frequency shear attenuation and modulus dispersion spectra were measured in a one-atmosphere reciprocating torsion apparatus at temperatures of 1200 ≤ T ≤ 1300°C and frequencies of 10^−2.25 ≤ f ≤ 10^0 Hz. Reciprocating tests were complemented by a series of small stress (τ ≤ 90 kPa) microcreep experiments at the same temperatures. In contrast to previous models where the parameters of viscoelastic models are derived by fitting the Laplace transform of the creep function to measured attenuation spectra, the parameters are derived solely from the fit of the creep function to the experimental microcreep curves using different published expressions for the relaxation strength of elastically-accommodated GBS. This approach may allow future studies to better link the large dataset of steady-state creep response to the dynamic attenuation behavior.