There is a vacuum in three generations of the Grotowski men's lives; this becomes clear within the film's first ten minutes. First Hank (Billy Bob Thornton) wakes alone in the middle of the night, vomits for no apparent reason, and makes a ritual trip to a lonely diner. Next Hank's boy Sonny (Heath Ledger) perfunctorily screws a prostitute who, after they have finished, tells him "you look so sad." Finally, Buck, the eldest, played by Peter Boyle, wanders through the house sucking breath from an oxygen tank, adds a new page to his capital punishment scrapbook, and spits racist epithets at some teenagers of color who wander into his yard.
What was the source of this great flowering? Much of the credit for it has tended to go to Jacobi and Mendelssohn, who in 1785 began a famous public dispute concerning the question whether or not Lessing had been a Spinozist, as Jacobi alleged Lessing had admitted to him shortly before his death in 1781. But Jacobi and Mendelssohn were both negatively disposed towards Spinoza. In On the Doctrine of Spinoza in Letters to Mr.
Herder has been sufficiently neglected in recent times, especially among philosophers, to need a few words of introduction. He lived from 1744 to 1803; he was a favorite student of Kant's, and a student and friend of Hamann's; he became a mentor to the young Goethe, on whose development he exercised a profound influence; and he worked, among other things, as a philosopher, literary critic, Bible scholar, and translator. As I mentioned, Herder has been especially neglected by philosophers (with two notable exceptions in the Anglophone world: Isaiah Berlin and Charles Taylor).
Deductive logic is about the validity of arguments. An argument is valid when its conclusion follows deductively from its premises. Here’s an example: If Alice is guilty then Bob is guilty, and Alice is guilty. Therefore, Bob is guilty. The validity of the argument has nothing to do with what the argument is about. It has nothing to do with the meaning, or content, of the argument beyond the meaning of logical phrases such as if…then. Thus, any argument of the following form (called modus ponens) is valid: If P then Q, and P, therefore Q. Any claims substituted for P and Q lead to an argument that is valid. Probability theory is also content-free in the same sense. This is why deductive logic and probability theory have traditionally been the main technical tools in philosophy of science.
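The content-free character of validity can be checked mechanically. A minimal Python sketch (my own illustration, not from the abstract): enumerate every assignment of truth values to P and Q and confirm that no assignment makes both premises of modus ponens true while making the conclusion false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Modus ponens is valid iff every truth-value assignment that makes both
# premises true ("if P then Q" and "P") also makes the conclusion ("Q") true.
valid = all(
    q                                    # the conclusion
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p               # ...whenever both premises hold
)

print(valid)  # True
```

Because the check quantifies over all truth values of P and Q, it never consults what P and Q mean, which is exactly the sense in which the form is content-free.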
This chapter examines four solutions to the problem of many models, and finds some fault or limitation with all of them except the last. The first is the naïve empiricist view that the best model is the one that best fits the data. The second is based on Popper’s falsificationism. The third approach is to compare models on the basis of some kind of trade-off between fit and simplicity. The fourth is the most powerful: cross-validation testing.
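As an illustration of the fourth approach, here is a minimal Python sketch (hypothetical data, not from the chapter) of leave-one-out cross-validation: a model family is scored by its average error on held-out points, which penalizes the overfitting that raw goodness-of-fit rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a straight line plus noise (so the degree-1 family is "true").
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)

def loo_cv_error(x, y, degree):
    """Leave-one-out cross-validation: fit the polynomial family on all
    points but one, score on the held-out point, average the squared errors."""
    errors = []
    for i in range(x.size):
        mask = np.arange(x.size) != i
        coeffs = np.polyfit(x[mask], y[mask], degree)
        pred = np.polyval(coeffs, x[i])
        errors.append((y[i] - pred) ** 2)
    return float(np.mean(errors))

# Raw fit to the training data always improves as the degree grows;
# cross-validation error instead penalizes high-degree overfitting.
for degree in (1, 4, 8):
    print(degree, loo_cv_error(x, y, degree))
```

The design point: unlike the first solution (best fit wins), cross-validation scores a family by predictive performance on data it was not fitted to.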
The distinction itself is best explained as follows. At the empirical level (at the bottom), there are curves, or functions, or laws, such as PV = constant in Boyle’s example, or a = M/r² in Newton’s example. The first point is that such formulae are actually ambiguous as to the hypotheses they represent. They can be understood in two ways. In order to make this point clear, let me first introduce a terminological distinction between variables and parameters. Acceleration and distance (a and r) are variables in Newton’s formula because they represent quantities that are more or less directly measured. The distinction between what is directly measured and what is not is to be understood relative to the context. All I mean is that values of acceleration and distance are determined independently of the hypothesis, or theory, under consideration. I do not mean that their determination involves no kind of inference at all. For instance, acceleration is the instantaneous change in velocity per unit time, and this is not something that is directly determined from raw data that records the position of the moon at consecutive points in time. It is consistent with that raw data that the motion of the moon is actually discontinuous, so that the moon has no acceleration. So, there are definitely theoretical assumptions made about the moon’s motion that are used to estimate the moon’s acceleration at a particular time. But these assumptions are not unique to Newton’s theory. The same assumptions are also made by the rival hypotheses under consideration. In fact, the existence of quantities such as instantaneous acceleration is only called into question by the far more recent theory of quantum mechanics. Likewise, in the case of Boyle’s law, there is no controversy in viewing the volume of the trapped air as being determined in a way that does not make use of the theory that Boyle is introducing.
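The variable/parameter contrast can be made concrete with a small Python sketch using invented numbers: pressure and volume play the role of directly measured variables, while the constant c in PV = c is the parameter whose value they jointly determine.

```python
import numpy as np

# Invented measurements: pressure and volume are the directly measured
# *variables*; the constant c in P*V = c is the *parameter* they determine.
P = np.array([1.0, 2.0, 4.0, 8.0])    # pressure (atm), hypothetical values
V = np.array([8.1, 3.9, 2.05, 0.99])  # volume (L), with measurement noise

c_estimates = P * V                # each (P, V) pair independently estimates c
c_hat = float(c_estimates.mean())  # pooled estimate of the one parameter

print(c_estimates)  # four nearly agreeing estimates
print(c_hat)        # close to 8
```

Note that nothing in the estimation of P or V here appeals to Boyle's law itself; the law only enters when the measured variables are combined to estimate the parameter.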
Wayne Myrvold (2003) has captured an important feature of unified theories, and he has done so in Bayesian terms. What is not clear is whether the virtue of such unification is most clearly understood in terms of Bayesian confirmation. I argue that the virtue of such unification is better understood in terms of other truth-related virtues such as predictive accuracy.
Type 1: This process occurs for half of the population. For this segment of the population, there is a 10% chance of developing the disease. There is a test for the disease such that 90% of the people in this segment who have the disease will test positive (event E), while the false positive rate is 10%: there is a 10% chance of testing positive when the disease is absent.
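Plugging the Type 1 numbers into Bayes' theorem (a Python sketch; the variable names are mine, not from the text) shows what a positive test implies for this segment:

```python
# Numbers from the Type 1 segment above (variable names are mine):
p_disease = 0.10            # prior chance of having the disease
p_pos_given_disease = 0.90  # test sensitivity
p_pos_given_healthy = 0.10  # false positive rate

# Total probability of a positive test (event E) in this segment.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(p_pos)                # ≈ 0.18
print(p_disease_given_pos)  # 0.5: a positive result leaves it at a coin flip
```

Despite the test's 90% sensitivity, the low prior means a positive result raises the probability of disease only to one half.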
Kenneth Wilson won the Nobel Prize in Physics in 1982 for applying the renormalization group, which he learned from quantum field theory (QFT), to problems in statistical physics—the induced magnetization of materials (ferromagnetism) and the evaporation and condensation of fluids (phase transitions). See Wilson (1983). The renormalization group got its name from its early applications in QFT. There, it appeared to be a rather ad hoc method of subtracting away unwanted infinities. The further allegation was that the procedure is so horrendously complicated that one cannot see the forest for the trees.
Deductive logic is about the property of arguments called validity. An argument has this property when its conclusion follows deductively from its premises. Here’s an example: If Alice is guilty then Bob is guilty, and Alice is guilty. Therefore, Bob is guilty. The important point is that the validity of this argument has nothing to do with the content of the argument. Any argument of the following form (called modus ponens) is valid: If P then Q, and P, therefore Q. Any claims substituted for P and Q lead to an argument that is valid. Probability theory is also content-free. This is why deductive logic and probability theory have traditionally been the main tools in philosophy of science.
A and B in signaling games (Lewis 1969). Members of the population, such as our prehistoric pair, are occasionally faced with the following ‘game’. Let one of the players be the receiver and the other the sender. The receiver needs to know whether B is true or not, but only possesses information about whether A is true or not. In some environmental contexts, A is sufficient for B, in others it is not. The sender knows nothing about A or B, but does know that A is sufficient for B in some environments. This is a higher-order signaling game in which both players can benefit from sharing the information that they possess. How does a communication strategy evolve, and is it evolutionarily stable?
Puzzle solving in normal science involves a process of accommodation—auxiliary assumptions are changed, and parameter values are adjusted so as to eliminate the known discrepancies with the data. Accommodation is often contrasted with prediction. Predictions happen when one achieves a good fit with novel data without accommodation. So, what exactly is the distinction, and why is it important? The distinction, as I understand it, is relative to a model M and a data set D, where M is a set of equations with adjustable parameters (i.e., M is a family whose members are particular equations with no free parameters). Definition: Model M predicts data D if and only if either (a) all members of M fit D well, or (b) a particular predictive hypothesis is selected from M by fitting M to other data, and the fitted model fits D well. M merely accommodates D if and only if (i) M does not predict D, and (ii) the predictive hypothesis selected from M using other data does not fit D well. There will be cases in which a model M neither predicts nor accommodates D. These are the cases in which we are willing to say that data falsifies the model. So, the distinction between prediction and accommodation applies only when there is no falsification.
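The definition's clause (b), together with the accommodation/falsification split, can be encoded as a toy classifier. This is my own Python sketch with a hypothetical mean-squared-error fit criterion; clause (a) is omitted for brevity, and "merely accommodates" is read as: the selected hypothesis fails on D, but refitting M directly to D succeeds.

```python
import numpy as np

def fits_well(coeffs, x, y, tol=0.05):
    # Hypothetical fit criterion: mean squared error below a tolerance.
    return float(np.mean((y - np.polyval(coeffs, x)) ** 2)) < tol

def classify(degree, x_other, y_other, x_d, y_d):
    """Classify a polynomial family M (given by its degree) against data D.
    The predictive hypothesis is selected by fitting M to the *other* data."""
    selected = np.polyfit(x_other, y_other, degree)
    if fits_well(selected, x_d, y_d):
        return "predicts"                 # clause (b)
    refit = np.polyfit(x_d, y_d, degree)
    if fits_well(refit, x_d, y_d):
        return "merely accommodates"      # fits D only after re-adjustment
    return "falsified"                    # neither predicts nor accommodates

# Toy data from a straight line: the linear family predicts; the constant
# family cannot fit the new data even after re-adjustment, so it is falsified.
line = lambda x: 1.0 + 2.0 * x
x1, x2 = np.linspace(0.0, 1.0, 10), np.linspace(1.0, 2.0, 10)
print(classify(1, x1, line(x1), x2, line(x2)))  # predicts
print(classify(0, x1, line(x1), x2, line(x2)))  # falsified
```

The sketch makes the relativity of the distinction explicit: the verdict depends jointly on the family M, the data D, and which other data were used to select the predictive hypothesis.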
Some quantum mechanical phenomena are notoriously hard to explain in causal terms. But what prior motivation is there for seeking a causal explanation in the first place, other than the fact that causal explanations have been used successfully to explain unrelated phenomena? The answer is two-fold. First, the agreement of independent measurements of probabilities is the mark of successful causal explanations, and it fails in many quantum mechanics examples. But secondly, and more importantly, causal explanations fail to replicate the successful predictions made by quantum mechanics of one phenomenon from other phenomena of a very different kind.
What is induction? John Stuart Mill (1874, p. 208) defined induction as the operation of discovering and proving general propositions. William Whewell (in Butts, 1989, p. 266) agrees with Mill’s definition as far as it goes. Is Whewell therefore assenting to the standard concept of induction, which talks of inferring a generalization of the form “All As are Bs” from the premise that “All observed As are Bs”? Does Whewell agree, to use Mill’s example, that inferring “All humans are mortal” from the premise that “John, Peter and Paul, etc., are mortal” is an example of induction? The surprising answer is “no”. How can this be?
The Value of Good Illustrative Examples: In order to speak as generally as possible about science, philosophers of science have traditionally formulated their theses in terms of elementary logic and elementary probability theory. They often point to real scientific examples without explaining them in detail and/or use artificial examples that fail to fit the intricacies of real examples. Sometimes their illustrative examples are chosen to fit their framework, rather than the science. Frequently these are non-scientific examples, which distances the discussion from its intended target. In the final analysis, philosophical discussions of explanation, confirmation, scientific realism, and the nature of theories are often too abstract, or too imprecise, or too disconnected from real science, to allow scientists to benefit from the discussion. This is a great loss for both parties. In my experience, working scientists are confronted with philosophical issues not only in their role as researchers, but also in their role as tertiary teachers of science.
Whewell, William (b Lancaster, England, 24 May 1794; d Cambridge, England, 6 March 1866) Born the eldest son of a carpenter, William Whewell rose to become Master of Trinity College, Cambridge and a central figure in Victorian science. After attending the grammar school at Heversham in Westmorland, Whewell entered Trinity College, Cambridge and graduated Second Wrangler. He became a Fellow of the College in 1817, took his M.A. degree in 1819, and his D.D. degree in 1844.
The title of my talk refers not to heated disputes within contemporary Hegel scholarship, but to the section of the same name in the Phänomenologie des Geistes of 1807: “Das geistige Tierreich und der Betrug oder die Sache selbst” (“The spiritual animal kingdom and deception, or the matter itself”). This comparatively little-noticed, and possibly even less well understood, section is in my view one of the most important in the whole book. I would therefore like to try today to clarify its significance somewhat.
This paper concerns a surprisingly sharp disagreement about the nature of ancient Pyrrhonism which first emerges clearly in Kant and Hegel, but which continues in contemporary interpretations. The paper begins by explaining the character of this disagreement, then attempts to adjudicate it in the light of the ancient texts.
Consideration of the German philosophy and political history of the past century might well give the impression, and often does give foreign observers the impression, that liberalism, including in particular commitment to the ideal of free thought and expression, is only skin-deep in Germany. Were not Heidegger's disgust at Gerede (which of course really meant the free speech of the Weimar Republic) and Gadamer's defense of "prejudice" and "tradition" more reflective of the true instincts of German philosophy than, say, the Frankfurt School's heavily Anglophone-influenced championing of free thought and expression? Were not the Kaiser and Nazism more telling of Germany's real political nature than the liberalism of the Weimar Republic (a desperate, ephemeral experiment undertaken in reaction to Germany's disastrous defeat in World War I) or the liberalism of (West) Germany since 1945 (in effect forced on the country by the victorious Allies after World War II)?
We create a database of company codes of ethics from firms listed on the Standard & Poor's 500 Index and, separately, a sample of small firms. The SEC believes that "ethics codes do, and should, vary from company to company." Using textual analysis techniques, we measure the extent of commonality across the documents. We find substantial levels of common sentences used by the firms, including a few cases where the codes of ethics are essentially identical. We consider these results in the context of legal statements versus value statements. While legal writing often mandates duplication, we argue that value-based statements should be held to a higher standard of originality. Our evidence is consistent with isomorphic pressures on smaller firms to conform.
The simple question ‘What is empirical success?’ turns out to have a surprisingly complicated answer. We need to distinguish between meritorious fit and ‘fudged fit’, which is akin to the distinction between prediction and accommodation. The final proposal is that empirical success emerges in a theory-dependent way from the agreement of independent measurements of theoretically postulated quantities. Implications for realism and Bayesianism are discussed. This paper was written when I was a visiting fellow at the Center for Philosophy of Science at the University of Pittsburgh; I thank everyone for their support.
For the purpose of this article, "hermeneutics" means the theory of interpretation, i.e. the theory of achieving an understanding of texts, utterances, and so on (it does not mean a certain twentieth-century philosophical movement). Hermeneutics in this sense has a long history, reaching back at least as far as ancient Greece. However, new focus was brought to bear on it in the modern period, in the wake of the Reformation with its displacement of responsibility for interpreting the Bible from the Church to individual Christians generally. This new focus on hermeneutics occurred especially in Germany.
The likelihood theory of evidence (LTE) says, roughly, that all the information relevant to the bearing of data on hypotheses (or models) is contained in the likelihoods. There exist counterexamples in which one can tell which of two hypotheses is true from the full data, but not from the likelihoods alone. These examples suggest that some forms of scientific reasoning, such as the consilience of inductions (Whewell, 1858, Novum Organon Renovatum, Part II of the 3rd ed. of The Philosophy of the Inductive Sciences; London: Cass, 1967), cannot be represented within Bayesian and Likelihoodist philosophies of science.
Classical mechanics is empirically successful because the probabilistic mean values of quantum mechanical observables follow the classical equations of motion to a good approximation (Messiah 1970, 215). We examine this claim for the one‐dimensional motion of a particle in a box, and extend the idea by deriving a special case of the ideal gas law in terms of the mean value of a generalized force used to define “pressure.” The examples illustrate the importance of probabilistic averaging as a method of abstracting away from the messy details of microphenomena, not only in physics, but in other sciences as well.
Herder already very early in his career, in the 1760s, established two vitally important and epoch-making principles in the philosophy of language: that thought is essentially dependent on and bounded by language; and that meanings or concepts should be identified not with such items as the referents involved, Platonic forms, or empiricist 'ideas', but with word-usages. What did Herder do for an encore? His Treatise on the Origin of Language from 1772 might seem the natural place to look for an answer to this question (since it is his best known work in the philosophy of language by far), but it is really the wrong place to look, because it temporarily regresses to a more conventional and less philosophically interesting position. However, Herder did succeed in making impressive progress in a broader array of works, namely by striving to identify prima facie problem cases confronting his two principles and to reconcile them with the latter. The main ones which he identified were God, animals, and non-linguistic art. In each of these cases, having initially proposed a reconciliation which did not work, he went on to develop a much more plausible one, indeed one which (at least in the two cases that really require one: animals and non-linguistic art) seems broadly correct.
Curriculum 2000 has meant significant change for the post-16 sector. New qualifications have been introduced (e.g. the new Advanced Subsidiary examination) and the number of students involved in education and training post-16 has increased. In this scenario how can the standards of new qualifications, particularly the new Advanced Subsidiary examinations, be compared with those of previous qualifications? One method is to use the prior achievement of candidates (i.e. GCSE results) as a basis for comparison of their results on subsequent qualifications (i.e. A levels and AS). This method of comparability and its limitations will be explored using examples with actual data.
Textbooks in quantum mechanics frequently claim that quantum mechanics explains the success of classical mechanics because “the mean values [of quantum mechanical observables] follow the classical equations of motion to a good approximation,” provided that “the dimensions of the wave packet be small with respect to the characteristic dimensions of the problem.” The equations in question are Ehrenfest’s famous equations. We examine this case for the one-dimensional motion of a particle in a box, and extend the idea by deriving a special case of the ideal gas law in terms of the mean value of a generalized force, which has been used in statistical mechanics to define ‘pressure’. The example may be an important test case for recent philosophical theories about the relationship between micro-theories and macro-theories in science.
Ketelaar and Ellis have provided a remarkably clear and succinct statement of Lakatosian philosophy of science and have also argued compellingly that the neo-Darwinian theory of evolution satisfies the Lakatosian criteria of progressivity. We find ourselves in agreement with much of what Ketelaar and Ellis say about Lakatosian philosophy of science, but have some questions about (1) the place of evolutionary psychology in a Lakatosian framework, and (2) the extent to which evolutionary psychology truly predicts new findings.
The theory of fast and frugal heuristics, developed in a new book called Simple Heuristics That Make Us Smart (Gigerenzer, Todd, and the ABC Research Group, in press), includes two requirements for rational decision making. One is that decision rules are bounded in their rationality: rules are frugal in what they take into account, and therefore fast in their operation. The second is that the rules are ecologically adapted to the environment, which means that they 'fit to reality.' The main purpose of this article is to apply these ideas to learning rules (methods for constructing, selecting, or evaluating competing hypotheses in science) and to the methodology of machine learning, of which connectionist learning is a special case. The bad news is that ecological validity is particularly difficult to implement and difficult to understand. The good news is that it builds an important bridge from normative psychology and machine learning to recent work in the philosophy of science, which considers predictive accuracy to be a primary goal of science.
Recent solutions to the curve-fitting problem, described in Forster and Sober (), trade off the simplicity and fit of hypotheses by defining simplicity as the paucity of adjustable parameters. Scott De Vito () charges that these solutions are 'conventional' because he thinks that the number of adjustable parameters may change when the hypotheses are described differently. This he believes is exactly what is illustrated in Goodman's new riddle of induction, otherwise known as the grue problem. However, the 'number of adjustable parameters' is actually a loose way of referring to a quantity that is not language dependent. The quantity arises out of Akaike's theorem in a way that ensures its language invariance.
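The role the parameter count plays can be illustrated with Akaike's criterion in its Gaussian least-squares form, AIC = n ln(RSS/n) + 2k, where k is the number of adjustable parameters. The Python sketch below uses invented data; the particular AIC variant is one standard textbook form, not a claim about the paper's own derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data generated by a two-parameter (straight-line) curve plus noise.
x = np.linspace(0.0, 1.0, 30)
y = 0.5 + 2.0 * x + rng.normal(0.0, 0.2, x.size)

def aic_score(x, y, k_params):
    """Gaussian least-squares form of Akaike's criterion for a polynomial
    family with k_params adjustable parameters:
        AIC = n * ln(RSS / n) + 2 * k
    Lower scores are better."""
    degree = k_params - 1
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    n = x.size
    return n * np.log(rss / n) + 2 * k_params

# The penalty term 2k depends only on how many parameters are adjustable,
# not on how the family happens to be written down.
for k in (2, 3, 6):
    print(k, aic_score(x, y, k))
```

The point of the sketch is where the penalty comes from: the 2k term is fixed by the dimension of the family, which is the quantity the paper argues is language-invariant, whatever description of the hypotheses one uses.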
It has become very popular among philosophers to attempt to discredit, or at least set severe limits to, the thesis that there exist conceptual schemes radically different from ours. This fashion is misconceived. Philosophers have attempted to justify it in two main ways: by means of arguments which are a priorist relative to the relevant linguistic and textual evidence (and either independent of or based upon positive theories of meaning, understanding, and interpretation); and by means of arguments which are a posteriorist relative to that evidence. The former approach is misconceived, not only in that its particular arguments fail, but also in principle. The latter approach, while in general the right sort of approach to adopt to the question, arrives at its conclusion only through faulty execution, through misinterpretation of the evidence. Though quite unjustified, philosophers' hostility to the thesis of radically different conceptual schemes is easily explained, namely, in terms of a number of psychologically powerful motives which it subserves. These motives cannot step in to provide the missing justification, however. Instead, they reveal such hostility in an even shadier light.