Science in general, and chemistry in particular, advances by methods that are difficult to codify. The availability of theories (models) and instrumentation plays an important role, but indefinable motivations to study individual phenomena are also involved. The area of chromium photophysics has a rich history that spans 150 years. A case history of its progression from the natural-history stage to its present state reveals how several factors common to much physical science research interact.
The aim of this highly original book is twofold: to explain the reconciliation of religion and politics in the work of John Locke, and to explore the relevance of that reconciliation for politics in our own time. Confronted with deep social divisions over ultimate beliefs, Locke sought to unite society in a single liberal community. Reason could identify divine moral laws that would be acceptable to members of all cultural groups, thereby justifying the authority of government. Greg Forster demonstrates that Locke's theory is liberal and rational but also moral and religious, providing an alternative to the two extremes of religious fanaticism and moral relativism. This fresh new account of Locke's thought will appeal to specialists and advanced students across philosophy, political science, and religious studies.
The central problem with Bayesian philosophy of science is that it cannot take account of the relevance of simplicity and unification to confirmation, induction, and scientific inference. The standard Bayesian folklore about factoring simplicity into the priors, and about convergence theorems as a way of grounding their objectivity, are some of the myths that Earman's book does not address adequately. Review of John Earman, Bayes or Bust?, Cambridge, MA: MIT Press, 1992, £33.75 cloth.
Herder has been sufficiently neglected in recent times, especially among philosophers, to need a few words of introduction. He lived 1744-1803; he was a favorite student of Kant's, and a student and friend of Hamann's; he became a mentor to the young Goethe, on whose development he exercised a profound influence; and he worked, among other things, as a philosopher, literary critic, Bible scholar, and translator. As I mentioned, Herder has been especially neglected by philosophers (with two notable exceptions in the Anglophone world: Isaiah Berlin and Charles Taylor). This...
Sober (1984) has considered the problem of determining the evidential support, in terms of likelihood, for a hypothesis that is incomplete in the sense of not providing a unique probability function over the event space in its domain. Causal hypotheses are typically like this because they do not specify the probability of their initial conditions. Sober's (1984) solution to this problem does not work, as will be shown by examining his own biological examples of common cause explanation. The proposed solution will lead to the conclusion, contra Sober, that common cause hypotheses explain statistical correlations and not matchings between event tokens.
Charles Peirce is often credited with being among the first, perhaps even the first, to develop a scientific metaphysics of indeterminism. After rejecting the received view that Peirce developed his views from Darwin and Maxwell, I argue that Peirce's view results from his synthesis of Immanuel Kant's critical philosophy and George Boole's contributions to formal logic. Specifically, I claim that Kant's conception of the laws of logic as the basis for his architectonic, when combined with Boole's view of probability, yields Peirce's metaphysics of probabilistic laws. Indeterminism provides, therefore, an excellent illustration of how Peirce attempted to use logic to clarify metaphysical problems. "Since everyone must have conceptions of things in general, it is most important that they should be carefully constructed. I shall enter into no criticism of the different methods of metaphysical research, but shall merely say that in the opinions of several great thinkers, the only successful mode yet lighted upon is that of adopting our logic as our metaphysics" (W1: 490, 1866).
Recent solutions to the curve-fitting problem, described in Forster and Sober (), trade off the simplicity and fit of hypotheses by defining simplicity as the paucity of adjustable parameters. Scott De Vito () charges that these solutions are 'conventional' because he thinks that the number of adjustable parameters may change when the hypotheses are described differently. This he believes is exactly what is illustrated in Goodman's new riddle of induction, otherwise known as the grue problem. However, the 'number of adjustable parameters' is actually a loose way of referring to a quantity that is not language dependent. The quantity arises out of Akaike's theorem in a way that ensures its language invariance.
Richard Rorty’s attempts to defend liberalism by appeal to pragmatism fail primarily as a result of his conflation of epistemological and political concepts. It is this confusion that leads him to defend unpalatable political views. Once the question of pragmatism is properly distinguished from the question of liberalism, it becomes clear that criticisms of Rorty’s politics have no bearing on his views of philosophy and, similarly, that acceptance of Rorty’s critique of philosophy does not commit pragmatists to his political views.
Sharvy's puzzle concerns a situation in which common knowledge between two parties is built up by each party's repeated observation of the other, no fixed point being reached in finite time. Can a fixed point be reached?
There is a vacuum in three generations of the Grotowski men's lives; this becomes clear within the film's first ten minutes. First Hank (Billy Bob Thornton) wakes alone in the middle of the night, vomits for no apparent reason, and makes a ritual trip to a lonely diner. Next Hank's boy Sonny (Heath Ledger) perfunctorily screws a prostitute who, after they have finished, tells him "you look so sad." Finally Buck, the eldest (played by Peter Boyle), wanders through the house sucking breath from an oxygen tank, adds a new page to his capital punishment scrapbook, and spits racist epithets at some teenagers of color who wander into his yard.
Machine generated contents note:
1. Rationality, idealism, monism, and beyond (Michael Della Rocca)
2. Kant's idea of the unconditioned and Spinoza's: the fourth antinomy and the ideal of pure reason (Omri Boehm)
3. The question is whether a purely apparent person is possible (Karl Ameriks)
4. Herder and Spinoza (Michael Forster)
5. Goethe's Spinozism (Eckart Förster)
6. Fichte on freedom: the Spinozistic background (Allen Wood)
7. Fichte on the consciousness of Spinoza's God (Johannes Haag)
8. Spinoza in Schelling's early conception of intellectual intuition (Dalia Nassar)
9. Schelling's philosophy of identity and Spinoza's ethica more geometrico (Michael Vater)
10. 'Omnis determinatio est negatio': determination, negation, and self-negation in Spinoza, Kant, and Hegel (Yitzhak Y. Melamed)
11. Thought and metaphysics: Hegel's critical reception of Spinoza (Dean Moyar)
12. Two models of metaphysical inferentialism: Spinoza and Hegel (Gunnar Hinricks)
13. Trendelenburg and Spinoza (Fred Beiser)
14. Replies on behalf of Spinoza (Don Garrett)
Ramsey, Stich and Garon (1991) argue that if the correct theory of mind is some parallel distributed processing theory, then folk psychology must be false. Their idea is that if the nodes and connections that encode one representation are causally active, then all representations encoded by the same set of nodes and connections are also causally active. We present a clear, and concrete, counterexample to RSG's argument. In conclusion, we suggest that folk psychology and connectionism are best understood as complementary theories. Each has different limitations, yet each will co-evolve with the other in an overlapping domain of 'normal' psychology.
Herder already very early in his career, in the 1760s, established two vitally important and epoch-making principles in the philosophy of language: that thought is essentially dependent on and bounded by language; and that meanings or concepts should be identified - not with such items as the referents involved, Platonic forms, or empiricist 'ideas' - but with word-usages. What did Herder do for an encore? His Treatise on the Origin of Language from 1772 might seem the natural place to look for an answer to this question (since it is his best known work in the philosophy of language by far), but it is really the wrong place to look, because it temporarily regresses to a more conventional and less philosophically interesting position. However, Herder did succeed in making impressive progress in a broader array of works, namely by striving to identify prima facie problem cases confronting his two principles and to reconcile them with the latter. The main ones which he identified were God, animals, and non-linguistic art. In each of these cases, having initially proposed a reconciliation which did not work, he went on to develop a much more plausible one, indeed one which (at least in the two cases that really require one: animals and non-linguistic art) seems broadly correct.
The simple question "What is empirical success?" turns out to have a surprisingly complicated answer. We need to distinguish between meritorious fit and 'fudged fit', which is akin to the distinction between prediction and accommodation. The final proposal is that empirical success emerges in a theory-dependent way from the agreement of independent measurements of theoretically postulated quantities. Implications for realism and Bayesianism are discussed. ‡This paper was written when I was a visiting fellow at the Center for Philosophy of Science at the University of Pittsburgh; I thank everyone for their support. †To contact the author, please write to: Department of Philosophy, University of Wisconsin–Madison, 5185 Helen C. White Hall, 600 North Park Street, Madison, WI 53706; e-mail: email@example.com.
Puzzle solving in normal science involves a process of accommodation: auxiliary assumptions are changed, and parameter values are adjusted, so as to eliminate the known discrepancies with the data. Accommodation is often contrasted with prediction. Predictions happen when one achieves a good fit with novel data without accommodation. So, what exactly is the distinction, and why is it important? The distinction, as I understand it, is relative to a model M and a data set D, where M is a set of equations with adjustable parameters (i.e., each member of M is an equation with no free parameters). Definition: Model M predicts data D if and only if either (a) all members of M fit D well, or (b) a particular predictive hypothesis is selected from M by fitting M to other data, and the fitted model fits D well. M merely accommodates D if and only if (i) M does not predict D, and (ii) the predictive hypothesis selected from M using other data does not fit D well. There will be cases in which a model M neither predicts nor accommodates D. These are the cases in which we are willing to say that the data falsify the model. So, the distinction between prediction and accommodation applies only when there is no falsification.
Curve-fitting typically works by trading off goodness-of-fit against simplicity, where simplicity is measured by the number of adjustable parameters. However, such methods cannot be applied in an unrestricted way. I discuss one such exception, and explain why it arises. The same kind of probabilistic explanation offers a surprising resolution to a common-sense dilemma.
It is shown that, according to NF, many of the assertions of ordinal arithmetic involving the T-function which is peculiar to NF turn out to be equivalent to the truth-in-certain-permutation-models of assertions which have perfectly sensible ZF-style meanings, such as: the existence of wellfounded sets of great size or rank, or the nonexistence of small counterexamples to the wellfoundedness of ∈. Everything here holds also for NFU if the permutations are taken to fix all urelemente.
Machine generated contents note:
List of abbreviations
Preface
1. Nominalism as demonic doctrine
2. Logic, philosophy and the special sciences
3. Continuity and the problem of universals
4. Continuity and meaning: Peirce's pragmatic maxim
5. Logical foundations of Peirce's pragmatic maxim
6. Experience and its role in inquiry
7. Scientific method as self-corrective: Peirce's view of the problem of knowledge
8. The unity of Peirce's theories of truth
9. Order from chaos: Peirce's evolutionary cosmology
10. A universe of chance: foundations of Peirce's indeterminism
11. From inquiry to ethics: the pursuit of truth as moral ideal
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike, which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non ad hocness, on the dispute about Bayesianism, and on empiricism and scientific realism. *Both of us gratefully acknowledge support from the Graduate School at the University of Wisconsin-Madison, and NSF grant DIR-8822278 (M.F.) and NSF grant SBE-9212294 (E.S.). Special thanks go to A. W. F. Edwards, William Harper, Martin Leckey, Brian Skyrms, and especially Peter Turney for helpful comments on an earlier draft.
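The fit-versus-simplicity trade-off Akaike's result formalizes can be sketched in code. The following is a toy illustration, not the paper's own example: synthetic data from a known quadratic, candidate polynomial families of increasing degree, and the standard AIC form n·ln(RSS/n) + 2k that applies under an assumed Gaussian error model. All names and data here are invented for the sketch.

```python
import numpy as np

# Synthetic data: a quadratic truth plus Gaussian noise (an assumption of this sketch).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)

def aic(degree):
    """AIC score for the best-fitting polynomial of the given degree.

    Uses n*ln(RSS/n) + 2k, a standard AIC form under Gaussian errors,
    where k = degree + 1 counts the adjustable parameters.
    """
    coeffs = np.polyfit(x, y, degree)              # least-squares fit
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1
    return x.size * np.log(rss / x.size) + 2 * k

# Lower AIC = better estimated predictive accuracy; extra parameters
# must earn their keep by reducing RSS enough to offset the 2k penalty.
scores = {d: aic(d) for d in range(1, 6)}
best = min(scores, key=scores.get)
```

A high-degree polynomial always fits the sample at least as well, yet the penalty term lets the data themselves favor a more parsimonious form, which is the sense in which the data "underwrite an inference concerning the curve's form."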
I. In this paper I want to sketch an account of the role of skepticism in Kant's critical philosophy.1 The critical philosophy set forth in the Critique of Pure Reason (henceforth: the Critique) grew from and responds to a complex set of philosophical concerns. Among these, two of special importance are concerns to address skepticism and to develop a reformed metaphysics. This much is widely recognized. However, it is a fundamental thesis of this paper that those projects belong tightly together, in the following sense: The types of skepticism which really originated and motivate the critical philosophy are ones which target metaphysics; and what originated and motivates the critical philosophy's reform of metaphysics is above all the goal of enabling it to withstand skepticism.
Consideration of the German philosophy and political history of the past century might well give the impression, and often does give foreign observers the impression, that liberalism, including in particular commitment to the ideal of free thought and expression, is only skin-deep in Germany. Were not Heidegger's disgust at Gerede (which of course really meant the free speech of the Weimar Republic) and Gadamer's defense of "prejudice" and "tradition" more reflective of the true instincts of German philosophy than, say, the Frankfurt School's heavily Anglophone-influenced championing of free thought and expression? Were not the Kaiser and Nazism more telling of Germany's real political nature than the liberalism of the Weimar Republic (a desperate, ephemeral experiment undertaken in reaction to Germany's disastrous defeat in World War I) or the liberalism of (West) Germany since 1945 (in effect forced on the country by the victorious Allies after World War II)?
This chapter examines four solutions to the problem of many models, and finds some fault or limitation with all of them except the last. The first is the naïve empiricist view that the best model is the one that best fits the data. The second is based on Popper's falsificationism. The third approach is to compare models on the basis of some kind of trade-off between fit and simplicity. The fourth is the most powerful: cross-validation testing.
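The fourth approach can be sketched in a few lines. This is a minimal, hand-rolled k-fold cross-validation on invented data (a noisy sine curve, polynomial model families of degree 1, 3, and 9); the data, degrees, and fold scheme are all assumptions of the sketch, not the chapter's own example. Each family is scored by its average squared error on held-out points, so a family is rewarded for predicting data it was not fitted to.

```python
import numpy as np

# Invented data for illustration: one period of a sine curve plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

def cv_error(degree, folds=5):
    """Mean held-out squared error of a degree-`degree` polynomial family.

    Each fold holds out every `folds`-th point, fits on the rest,
    and measures prediction error on the held-out points only.
    """
    idx = np.arange(x.size)
    errors = []
    for f in range(folds):
        test = (idx % folds) == f
        train = ~test
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[test])
        errors.append(float(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errors))

# A too-simple family (1) underfits; a too-flexible family (9) risks
# chasing noise; cross-validation scores them by out-of-sample error alone.
scores = {d: cv_error(d) for d in (1, 3, 9)}
best = min(scores, key=scores.get)
```

Unlike fit-plus-simplicity scoring, no explicit parameter count appears here; the penalty for excess flexibility is paid implicitly, in degraded performance on the held-out folds.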
What is induction? John Stuart Mill (1874, p. 208) defined induction as the operation of discovering and proving general propositions. William Whewell (in Butts, 1989, p. 266) agrees with Mill's definition as far as it goes. Is Whewell therefore assenting to the standard concept of induction, which talks of inferring a generalization of the form "All As are Bs" from the premise that "All observed As are Bs"? Does Whewell agree, to use Mill's example, that inferring "All humans are mortal" from the premise that "John, Peter and Paul, etc., are mortal" is an example of induction? The surprising answer is "no". How can this be?
Deductive logic is about the validity of arguments. An argument is valid when its conclusion follows deductively from its premises. Here's an example: If Alice is guilty then Bob is guilty, and Alice is guilty. Therefore, Bob is guilty. The validity of the argument has nothing to do with what the argument is about. It has nothing to do with the meaning, or content, of the argument beyond the meaning of logical phrases such as if…then. Thus, any argument of the following form (called modus ponens) is valid: If P then Q, and P, therefore Q. Any claims substituted for P and Q lead to an argument that is valid. Probability theory is also content-free in the same sense. This is why deductive logic and probability theory have traditionally been the main technical tools in philosophy of science.
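The content-free character of modus ponens can be checked mechanically: run through every assignment of truth values to P and Q and confirm that whenever both premises come out true, so does the conclusion. The snippet below is a small illustrative check, not anything from the original text.

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q false."""
    return (not p) or q

# Validity = no counterexample: in every truth-value assignment where
# both premises ('if P then Q' and 'P') hold, the conclusion Q holds too.
valid = all(
    q                                          # conclusion: Q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p                     # premises: if P then Q; P
)
print(valid)  # True: no assignment makes the premises true and Q false
```

Nothing about the meaning of P or Q enters the check, which is precisely the sense in which the argument form, rather than its content, carries the validity.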
This article leverages insights from the body of Adam Smith's work, including two lesser-known manuscripts, the Theory of Moral Sentiments and the Lectures on Jurisprudence, to help answer the question of how companies should morally prioritize corporate social responsibility (CSR) initiatives and stakeholder claims. Smith makes philosophical distinctions between justice and beneficence and between perfect and imperfect rights, and we leverage those distinctions to speak to contemporary CSR and stakeholder management theories. We address the often-neglected question of how far a company should be expected to go in pursuit of CSR initiatives, and we offer a fresh perspective on the role of business in relation to stakeholders and to society as a whole. Smith's moral insights help us to propose a practical framework of legitimacy in stakeholder claims that can help managers select appropriate and responsible CSR activities.
We create a database of company codes of ethics from firms listed on the Standard & Poor's 500 Index and, separately, a sample of small firms. The SEC believes that "ethics codes do, and should, vary from company to company." Using textual analysis techniques, we measure the extent of commonality across the documents. We find substantial levels of common sentences used by the firms, including a few cases where the codes of ethics are essentially identical. We consider these results in the context of legal statements versus value statements. While legal writing often mandates duplication, we argue that value-based statements should be held to a higher standard of originality. Our evidence is consistent with isomorphic pressures on smaller firms to conform.