A venerable tradition in the metaphysics of science commends ontological reduction: the practice of analyzing theoretical entities into further and further proper parts, with the understanding that the original entity is nothing but the sum of these. This tradition implicitly subscribes to the principle that all the real action of the universe (also referred to as its "causation") happens at the smallest scales—at the scale of microphysics. A vast majority of metaphysicians and philosophers of science, covering a wide swath of the spectrum from reductionists to emergentists, defend this principle. It provides one pillar of the most prominent theory of science, to the effect that the sciences are organized in a hierarchy, according to the scales of measurement occupied by the phenomena they study. On this view, the fundamentality of a science is reckoned inversely to its position on that scale. This venerable tradition has been justly and vigorously countered—in physics most notably: it is countered in quantum theory, in theories of radiation and superconduction, and most spectacularly in renormalization theories of the structure of matter. But these counters—and the profound revisions they prompt—lie just below the philosophical radar. This book illuminates these counters to the traditional principle, in order to assemble them in support of a vaster (and at its core Aristotelian) philosophical vision of sciences that are not organized within a hierarchy. In so doing, the book articulates the principle that the universe is active at absolutely all scales of measurement. This vision, as the book shows, is warranted by philosophical treatment of cardinal issues in the philosophy of science: fundamentality, causation, scientific innovation, dependence and independence, and the proprieties of explanation.
This article aims to show that fundamentality is construed differently in the two most prominent strategies of analysis we find in physical science and engineering today: (1) atomistic, reductive analysis and (2) systems analysis. Correspondingly, atomism is the conception according to which the simplest (smallest) indivisible entity of a certain kind is most fundamental; while systemism, as will be articulated here, is the conception according to which the bonds that structure wholes are most fundamental, and scale and/or constituting entities are of no significance whatsoever for fundamentality. Accordingly, atomists maintain that the basic entities—the atoms—are fundamental, and together with the "external" interactions among them, are sufficient for illuminating all the features and behaviors of the wholes they constitute; whereas systemists proclaim that it is instead the structural qualities of systems, which flow from internal relations among their constituents and translate directly into behaviors, that are fundamental, and by themselves largely (if not entirely) sufficient for illuminating the features and behaviors of the wholes thereby structured. Systemism, as will be argued, is consistent with the nonexistence of a fundamental "level" of nondecomposable entities, just as it is consistent with the existence of such a level. Still, systemism is a conception of the fundamental in quite different, but still ontological, terms. Systemism can serve the special sciences—the social sciences especially—better than the conception of fundamentality in terms of atoms. Systemism is, in fact, a conception of fundamentality that has rather different uses—and, importantly, different resonances. This conception of fundamentality makes contact with questions pertaining to natural kinds and their situation in the metaphysics of the special sciences—their situation within an order of autonomous sciences.
The controversy over fundamentality is evident in the social sciences too, albeit somewhat imperfectly, in the terms of the debate between methodological individualists and functionalists/holists. This article will thus clarify the difference between systemism and holism.
I shall endeavor to show that every physical theory since Newton explains without drawing attention to causes—that, in other words, physical theories as physical theories aspire to explain under an ideal quite distinct from that of causal explanation. If I am right, then even if sometimes the explanations achieved by a physical theory are not in violation of the standard of causal explanation, this is purely an accident. For physical theories, as I will show, do not, as such, aim at accommodating the goals or aspirations of causal explanation. This will serve as the founding insight for a new theory of explanation, which will itself serve as the cornerstone of a new theory of scientific method.
Traditional articulations of the conception of dirty hands, as the doing of wrong in order to do right, invite construals of the issues raised thereby as mired in conceptual confusions and inconsistencies, and moreover as generating unproductive discussions of the scope of the proposed notion itself. The status of the concept of dirty hands is thus precarious, in spite of its provenance in the work of political thinkers such as Machiavelli. This essay articulates one nonparadoxical conception of dirty hands, as the uncomfortable phenomenology of agents authorized to act on behalf of others, when they are called upon to do things that, while morally correct, are also personally or morally distasteful. The key point is that the dirt of action remains on these agents’ hands, even if moral responsibility for the relevant actions should be conceived as falling on the persons in whose name they take the said action. This articulation opens up for investigation questions about the phenomenology of authorized agency, as well as the moral labor involved in leadership and in other contexts of authorized distasteful action. These are questions of social ontology. Here there is important philosophical work that remains to be done.
Social sciences face a well-known problem, which is an instance of a general problem faced as well by psychological and biological sciences: the problem of establishing their legitimate existence alongside physics. This, as will become clear, is a problem in metaphysics. I will show how a new account of structural explanations, put forward by Frank Jackson and Philip Pettit, which is designed to solve this metaphysical problem with social sciences in mind, fails to treat the problem in any importantly new way. Then I will propose a more modest approach, and show how it does not deserve the criticism directed at a prototype by Jackson and Pettit.
This essay offers a motivational conception of solidarity that can be employed across the entire range of sciences and humanities, while also filling a gap in the motivational spectrum conceived by decision theorists and economists—and expanding the two-part division between altruistic and selfish motivations into a tripartite analysis that suggests a spectrum instead. According to the present proposal, solidarity is a condition of action-readiness on behalf of a group or its interests. The proposal will admit of measuring the extent to which Prisoner’s Dilemmas are overcome in real life via acts of solidarity.
This paper documents a wide range of nonreductive scientific treatments of phenomena in the domain of physics. These treatments strongly resist characterization as explanations of macrobehavior exclusively in terms of behavior of microconstituents. For they are treatments in which macroquantities are cast in the role of genuine and irreducible degrees of freedom. "One is driven into reductionism when one is not cultivated to possess an array of distinctions rich enough to let things be what they are. In contrast, making the decisive distinction has an illuminating and liberating effect because it lets the concrete occurrence stand forth for what it is. We understand it not in terms of a decipherment, but on its own terms." –Robert Sokolowski ("Making Distinctions", 1992).
In spite of its infinite expectation value, the St. Petersburg game is not only a gamble without supply in the real world, but also one without demand at apparently very reasonable asking prices. We offer a rationalizing explanation of why the St. Petersburg bargain is unattractive on both sides (to both house and player) in the mid-range of prices (finite but upwards of about $4). Our analysis – featuring (1) the already-established fact that the average of finite ensembles of the St. Petersburg game grows with ensemble size without bound, and (2) our own simulation data showing that the debt-to-entry fee ratio rises exponentially – explains why both house and player are quite rational in abstaining from the St. Petersburg game. The house will be unavoidably (and intentionally) exposed to very large ensembles (with very high averages, and so very costly to them), while contrariwise even the well-heeled player is not sufficiently capitalized (as our simulation data reveals) to be able to capture the potential gains from large-ensemble play. (Smaller ensembles, meanwhile, enjoy low means, as others have shown, and so are not worth paying more than $4 to play, even if a merchant were to offer them at such low prices per trial.) Both sides are consequently rational in abstaining from entry into the St. Petersburg market in the mid-range of asking prices. We utilize the concept of capitalization vis-à-vis a gamble to make this case. Classical analyses have paid insufficient attention to the propriety of using expected values to assess the St. Petersburg gamble. And extant analyses have not noted the average-maximum-debt-before-breaking-even figures, and so are incomplete.
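The first of the two facts featured in the abstract above—that finite-ensemble averages of the St. Petersburg game grow with ensemble size—can be illustrated with a minimal simulation. This is a sketch only: the payoff schedule ($2 doubling on each tail) is the standard textbook version, and the code does not reproduce the paper's own simulation of debt-to-entry-fee ratios.

```python
import random

def st_petersburg_payoff(rng):
    """One play: the pot starts at $2 and doubles on each tail; paid out on the first head."""
    payoff = 2
    while rng.random() < 0.5:  # tail: the pot doubles and the coin is flipped again
        payoff *= 2
    return payoff

def ensemble_average(n_plays, rng):
    """Average payoff over a finite ensemble of n_plays independent games."""
    return sum(st_petersburg_payoff(rng) for _ in range(n_plays)) / n_plays

rng = random.Random(0)
for n in (100, 10_000, 1_000_000):
    # Averages tend to climb with ensemble size (roughly like log2 of n),
    # despite the infinite expectation value of a single play.
    print(n, ensemble_average(n, rng))
```

Small ensembles yield low averages (explaining why a player should not pay much more than $4 per trial), while ever-larger ensembles push the average without bound (explaining the house's exposure).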
This paper argues that the doctrines of determinism and supervenience, while logically independent, are importantly linked in physical mechanics—and quite interestingly so. For it is possible to formulate classical mechanics in such a way as to take advantage of the existence of mathematical devices that represent the advance of time—and which are such as to inspire confidence in the truth of determinism—in order to prevent violation of supervenience. It is also possible to formulate classical mechanics—and to do so in an observationally equivalent, and thus equally empirically respectable, way—such that violations of supervenience are (on the one hand) routine, and (on the other hand) necessary for achieving complete descriptions of the motions of mechanical systems—necessary, therefore, for achieving a deterministic mechanical theory. Two such formulations—only one of which preserves supervenience universally—will conceive of mechanical law in quite different ways. What's more, they will not admit of being extended to treat thermodynamical questions in the same way. Thus we will find that supervenience is a contingent matter, in the following rather surprising and philosophically interesting way: we cannot in mechanics separate our decisions to conceive of physical law in certain ways from our decisions to treat macroscopic quantities in certain ways.
The twin conceptions of (1) natural law as causal structure and (2) explanation as passage from phenomenon to cause are two sides of a certain philosophical coin, to which I shall offer an alternative – Humean – currency. The Humean alternative yokes together a version of the regularity conception of law and a conception of explanation as passage from one regularity to another, which has the first as an instance but of which it is not itself an instance. I will show that the regularity conception of law is the basis of a distinguished branch of physical mechanics; thus the Humean conception of law, like its better-loved rival, enjoys the support of a venerated tradition in mechanical theory – in fact, that strand which culminates in quantum theory. I shall also offer an account of explanatory asymmetry, a natural companion to the Humean conception of explanation as passage from one regularity to another of greater scope, as an alternative to van Fraassen’s unsatisfactory account. My account of asymmetry is just as free of reliance on context as it is free of reliance on cause. I shall thus proclaim that explanatory asymmetry is at once a reality deserving of philosophical treatment – one not to be given over to the care of psychology or linguistics – and at the same time susceptible of an account worthy of Hume.
Thomas Schelling taught us that in ordinary human affairs, conflict and common interest are ubiquitously intertwined. For when it comes to variety, the occasion of pure conflict (known to some of its friends as the zero-sum game) is as under-represented in human affairs as the occasion of undiluted common interest (known as the pure coordination game). The undiluted extremes are the exceptions, when it comes to counting kinds, while the mixed-motive kind of occasion is the rule. Things look a bit different, however, when one looks at sheer numbers of true-life occasions, as I will explain. Schelling also taught us that in the diverse space of mixed-motive affairs, intermediate between pure conflict and pure coordination, there is more true-life collaborative behavior—more trustings and promise-keepings—than our prescriptive decision theories can accommodate, never mind explain. This collaborative behavior, as long ago another Thomas—Thomas Hobbes—was painfully aware, requires some explaining precisely because of the presence of ineliminable conflict of interest. The problem confronting those who aspire to explain such collaboration is to identify weighty motivations in its favor, in the face of the weighty but countervailing motivations. What shall concern me here is not the collaboration that transpires in the face of conflict of interest, but the success which meets us more than halfway in those limiting cases of pure coordination, so as (for example) not to collide in roadways and corridors. For in sheer quantity, the occasions for pure coordination outnumber the occasions for anything else perhaps a hundredfold.
This paper distinguishes two conceptions of collectivity, each of which tracks the targets of classification according to their aetiology. Collectivities falling under the first conception are founded on (more-or-less) explicit negotiations amongst the members, who are known to one another personally. Collectivities falling under the second (philosophically neglected) conception are founded – at least initially – purely upon a shared conception of “we”, very often in the absence of prior acquaintance and personal interaction. The paper argues that neglect of collectivities of the second kind renders certain social phenomena (for example, sense of place and certain kinds of conflicted loyalty) inexplicable or invisible. And the paper also stresses that a conception referring to the second kind of collectivity will put us in position to revitalize a variety of important questions, including: Which conception of collectivity best serves the needs of a theory of justice? The paper will contrast the distinction between these two conceptions with the classical Gemeinschaft/Gesellschaft distinction, as well as with more recent attempts to articulate differences between groups according to membership-structuring norms.
Science seems generally to aim at truth. And governmental support of science is often premised on the instrumental value of truth in service of advancing our practical objectives, both as individuals and as communities, large and small. While there is some political expediency to this view, it is not correct. The value of truth is nowise that it helps us achieve our aims. In fact, just the contrary: truth deserves to be believed only on the condition that its claim upon us is orthogonal to any utility it might have in the service of (any and all) practical ends.
Dynamical-systems analysis is nowadays ubiquitous. From engineering (its point of origin and natural home) to physiology, and from psychology to ecology, it enjoys surprisingly wide application. Sometimes the analysis rings decisively false—as, for example, when adopted in certain treatments of historical narrative; other times it is provocative and controversial, as when applied to the phenomena of mind and cognition. Dynamical-systems analysis (or “Systems” with a capital “S,” as I shall sometimes refer to it) is simply a tool of analysis. It mobilizes the language and mathematical technology of differential equations, and brings into play the distinctive concepts of equilibrium and attractor, as well as gain, coupling and neighborhood, which are not obviously the proprietary property of any particular domain of objects or regime in the world. It is the ecumenical language of engineers, universal in scope. Still, Systems, as a mode of analysis, itself stands in need of clarification. Once that clarity has been attained, one can then ask: are there limitations or bounds on proper application of Systems analysis that are themselves premised upon considerations internal to the analysis itself? This is one of several questions to which the present essay is devoted. Before it can be attempted, however, we shall require some groundwork that clarifies the mode of analysis that is Systems—the family of analyses to which it belongs. This will begin to bring out (among other things) the precise difference of subject matter between biology and physics. And the ecumenicality of Systems analysis is bound to have its own distinctive commitments, as we shall see. Practitioners attuned to the signal characteristics of Systems analysis—characteristics that set it apart—proclaim its many advantages.
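The domain-neutral vocabulary the abstract lists—equilibrium, attractor, gain, coupling—can be given a minimal illustration. The coupled pair of differential equations below is hypothetical and stands in for no particular application; it merely exhibits how the same formalism applies whatever the variables happen to measure.

```python
# Forward-Euler sketch of two coupled first-order systems with coupling gain g:
#   dx/dt = -x + g*y,   dy/dt = -y + g*x
# For |g| < 1 the origin is a stable equilibrium: an attractor toward which
# trajectories from any nearby neighborhood converge.
def simulate(x, y, g, dt=0.01, steps=2000):
    for _ in range(steps):
        dx = -x + g * y
        dy = -y + g * x
        x, y = x + dt * dx, y + dt * dy
    return x, y

x_final, y_final = simulate(1.0, -0.5, g=0.5)
print(abs(x_final) < 1e-3 and abs(y_final) < 1e-3)  # trajectories have settled onto the attractor
```

Nothing in the code says whether x and y are voltages, populations, or prices—which is exactly the ecumenicality (and, the essay suggests, the distinctive commitments) at issue.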
The goal of the essay is to articulate some beginnings for an empirical approach to the study of agency, in the firm conviction that agency is subject to scientific scrutiny, and is not to be abandoned to high-brow aprioristic philosophy. Drawing on insights from decision analysis, game theory, general dynamics, physics and engineering, this essay will examine the diversity of planning phenomena, and in that way take some steps towards assembling rudiments for the budding science, in the process innovating (parts of) a technical vocabulary. The key is focus upon the organization of effort in time. This paper categorizes forms of organization of effort in time, and yields an analysis of both individual agency and coalitions of agents as forms of effort organized in time. Finally, it articulates precise questions pertaining to the natural (evolutionary) history of forms of agency (once upon a time referred to as ‘Will’) that we now find on the ground.
A superselection rule advanced in the course of a quantum-mechanical treatment of some phenomenon is an assertion to the effect that the superposition principle of quantum mechanics is to be restricted in the application at hand. Superselection accounts of measurement all have in common a decision to represent the indicator states of detectors by eigenspaces of superselection operators named in a superselection rule, on the grounds that the states in question are states of a so-called classical quantity and therefore not subject to quantum interference effects. By this strategy superselectionists of measurement expect to dispense with use of projection postulates in treatments of measurement. I shall argue that superselection accounts of measurement are self-contradictory, and that treatments of infinite systems, if they can avoid the contradiction, are not true superselection accounts.
This chapter focuses on finding better ways to conceptualize precaution. Precaution has now become an established principle of environmental governance, although it has not been clearly distinguished from conventional risk assessment. Some have considered it the antithesis of risk assessment, in the sense that it is exercised to avoid serious potential harm without scientific certainty as to the likelihood, magnitude, or causation of that harm. The first and foremost task of this chapter is to show that these concepts are non-overlapping. It also elaborates on the reasons why precaution should be taken seriously, and why, on the flipside, traditional risk assessment, or cost-benefit analysis, does not take precaution seriously, owing to the pervasiveness of its underlying calculus. That calculus represents the ascendancy of a style of decision-making that grew out of twentieth-century British moral theory and culminated in 1944 with the presentation of the doctrine now familiarly known as the theory of expected utility.
We seek to illuminate the prevalence of cooperation among biologically unrelated individuals via an analysis of agency that recognizes the possibility of bonding and challenges the common view that agency is invariably an individual-level affair. Via bonding, a single individual’s behavior patterns or programs are altered so as to facilitate the formation, on at least some occasions, of a larger entity to whom is attributable the coordination of the component entities. Some of these larger entities will qualify as agents in their own right, even when the comprising entities also qualify as agents. In light of the many possibilities that humans actually enjoy for entering into numerous bonding schemes, and the extent to which they avail themselves of these possibilities, there is no basis for the assumption that cooperative behavior must ultimately emerge as either altruistic or self-interested; it can instead be the product of collective agency.
The principle that causes always render their effects more likely is fundamental to the enterprise of reducing facts of causation to facts about (objective) chances. This reductionist enterprise faces famous difficulties in accommodating common-sense intuitions about causal processes, if it insists on cashing out causal processes in terms of streams of events in which every event that belongs to the stream is a cause of the adjoining event downstream of it. I shall propose modifications to this way of cashing out causal processes, still well within the reductionist faith. These modifications will allow the reductionist to handle processes successfully, on the assumption that the reductionist proposal is itself otherwise satisfactory. I shall then argue that the reductionist enterprise lies squarely behind the Theory of Relativity, and so has all the confirmatory weight of Relativity behind it. However, this is not all good news for reductionists. For throughout I shall simply assume that the reductionist proposal, to the effect that causes are just chance-raisers, is correct. And I shall sidestep problems with that proposal as such. And so I shall show that, if in the end we find the reductionist proposal unsatisfactory, it cannot be on grounds of its treatment of causal processes as such. Thus, while I shall argue that causal processes pose no extra trouble for reductionists, I shall be making a case that all the action between reductionists and their opponents should be focused upon the proposal to reduce the two-term causal relation itself to relations amongst probabilities.
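The principle named at the opening of the abstract above—that causes render their effects more likely—amounts to the condition P(E | C) > P(E). A toy joint distribution (the numbers are hypothetical, chosen only for illustration) makes the chance-raising condition concrete:

```python
# Joint distribution over (cause C occurs, effect E occurs); values are probabilities.
P = {
    (True, True): 0.4,
    (True, False): 0.1,
    (False, True): 0.1,
    (False, False): 0.4,
}

def prob(pred):
    """Probability of the event picked out by the predicate over outcomes."""
    return sum(p for outcome, p in P.items() if pred(outcome))

p_e = prob(lambda o: o[1])                                   # P(E) = 0.5
p_e_given_c = prob(lambda o: o[0] and o[1]) / prob(lambda o: o[0])  # P(E|C) = 0.8
raises = p_e_given_c > p_e  # C is a chance-raiser of E on this distribution
print(p_e, p_e_given_c, raises)
```

A reductionist "stream" account would then require each event in a causal process to stand in this chance-raising relation to the adjoining event downstream of it; the article's modifications concern exactly that requirement, not the inequality itself.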
Against Border Patrols. Mariam Thalos - 2017 - In Maarten Boudry and Massimo Pigliucci (eds.), Science Unlimited? Challenges of Scientism. Chicago, pp. 283–301.
Can victims of the oracle paradox, which is known primarily through its unexpected-hanging and surprise-examination versions, extricate themselves from their difficulties of reasoning? No. For they do not, contrary to recent opinion, commit errors of fallacious elimination. As I shall argue, the difficulties of reasoning faced by these victims do not originate in the domain of concepts, propositions and their entailment relations; nor do they result from misapprehensions about limitations on what can be known. The difficulties of reasoning flow, instead, from conflicts that arise in the practical dimension of life. The oracle paradox is in this way more evocative of problems faced in the theory of computation than of the celebrated Russell’s paradox.
Chemistry possesses a distinctive theoretical lens—a distinctive set of theoretical concerns regarding the dynamics and transformations of a perplexing variety of organic and inorganic substances—to which it must be faithful. Even if it is true that chemical facts bear a special (reductive) relationship to physical facts, nonetheless it will always still be true that the theoretical lenses of the two disciplines are distinct. This has consequences for how chemists pursue their research, as well as how chemistry should be taught.
This is my impersonation of a philosopher working from home, which aims at making lively a few worthy philosophical questions. The old is new again, as each generation confronts its own challenges and demons, in its own context.
Decision theory cannot be a purely formal theory, free of all metaphysical assumptions and ascertainments. It must instead rely upon the end user for the wisdom it takes to prime the decision formalism – with principles and assumptions about the metaphysics of the application context – so that the formalism in its turn can generate good advice. Appreciating this idea is fundamental to understanding the true rivalry between evidential decision theory (EDT) and causal decision theory (CDT) in specific cases. I shall argue that no decision theory can deliver a verdict unless assumptions are made about the degrees of freedom in the context of decision, that EDT and CDT disagree fundamentally about how to diagnose the degrees of freedom in any given situation, and that from this fundamental disagreement flow their surface disagreements in iconic cases.
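One iconic case of the surface disagreement between EDT and CDT is Newcomb's problem. The sketch below uses hypothetical payoffs and an assumed predictor accuracy, and is not drawn from the article itself; it shows only how the two theories' differing diagnoses of the degrees of freedom (is the prediction evidentially linked to the act, or causally fixed?) yield opposite verdicts from the same numbers.

```python
ACC = 0.99               # assumed predictor accuracy (hypothetical)
M, K = 1_000_000, 1_000  # opaque-box prize and transparent-box amount (hypothetical)

# EDT conditions the prediction on the act: choosing one box is evidence
# that one-boxing was predicted.
edt_one_box = ACC * M
edt_two_box = ACC * K + (1 - ACC) * (M + K)

# CDT holds the prediction causally fixed: for ANY prior probability p that
# the opaque box is full, two-boxing gains exactly K over one-boxing.
def cdt_values(p):
    one_box = p * M
    two_box = p * (M + K) + (1 - p) * K
    return one_box, two_box

print(edt_one_box > edt_two_box)   # EDT favors one-boxing
one, two = cdt_values(0.5)
print(two - one == K)              # CDT favors two-boxing, by margin K, whatever p is
```

The disagreement is not over the arithmetic but over which quantities count as free: EDT lets the probability of the prediction vary with the act, while CDT treats it as a fixed background fact.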
This paper challenges Gardiner’s (2006) contention that his Core Precautionary Principle (CPP) “tracks our [precautionary] intuitions about some core cases, including the paradigmatic environmental ones”, and instead sketches a handful of precautionary practices in navigational systems that (collectively) do better at tracking these “intuitions”. There is no way of measuring these diverse practices as to relative weakness or strength against each other. And ultimately it makes little sense to talk about precautionary principles on any strength scale—as Gardiner (2006) aspires to do (and in such a way as to locate CPP firmly in the middle). Indeed, it makes little sense to proclaim that precaution can be captured in any one decision rule or formula because, as will be illustrated here, precaution is many nonoverlapping things, each of which is appropriate in different stages of the navigation process, from confrontation of a dilemma to ultimate resolution of it.
This essay identifies foundational questions, all metaphysical in character, which must be answered before the enterprise of epistemology proper can begin to prosper, and in the process draws attention to fundamental conflicts between the demands of epistemology and the demands of prudence. It concludes that knowledge is not, as such, a directive of prudence, and thus that the enterprise of knowledge does not fall under the category of what is practically required.
This essay is not concerned exclusively with procedure. In addition to developing and promoting an alternative methodology, I will also be utilizing it to defend, systematically, a proposition unfashionable nowadays. This is the proposition that the question of how a particular judgment, on a particular occasion, is to be justified is independent of the question of how that judgment comes to be formed by the individual who forms it. This thesis, which I shall call j-independence, is deplored in certain (self-styled ‘naturalized’) schools of epistemology. Its denial is one of the two dogmas of this essay’s title. The other dogma is the metaphysical one on which it rests, which I shall call personalism.
The difficulties of justifying a recipe for scientific inquiry that calls for sensory experience and logic as sole ingredients can hardly be overestimated. Resolving the riddles of induction, steadily mounting against empiricism since Hume, has come to seem like an exercise in making bricks without straw. To be forgiven the debt of solving these riddles, whether by feminists or others, would come as a great relief. But such relief, I shall argue, can come only at the very high price of removing any capacity to evaluate inductive inference patterns. And if traditional philosophy cannot tolerate this loss, feminism should tolerate it even less.
the voluntary actions of such beings cannot be covered by causal laws. Decision theorists, accepting the premise of this argument, appeal instead to noncausal laws predicated on principles of success-oriented action, and use these laws to produce substantive and testable predictions about large-scale human behavior. The primary directive of success-oriented action is maximization of some valuable quantity. Many economists and social scientists use the principles of decision theory to explain social and economic phenomena, while many political philosophers use them to make recommendations on questions of...
Is a molecule-for-molecule duplicate D of some entity always a perfect duplicate of it? And in particular: is D a being with consciousness if its original is? These questions summarize a certain diagnostic tool used by metaphysicians, and prominently used in service of a form of dualism that is supposed to support an autonomous science of consciousness. This essay argues that this diagnostic tool is inapt when the exercise is performed as a pure thought experiment, for the sake of eliciting data or judgment from intuition. The trouble is that intuition can render for a “duplicate” only what experience or other learning (perhaps via dogmatic methods) has instilled in the intuiter. But rather than disappoint the aspirations of an autonomous science, the argument of this essay will instead illuminate its better metaphysical supports.
Utility theories—both Expected Utility (EU) and non-Expected Utility (non-EU) theories—offer numericalized representations of classical principles meant for the regulation of choice under conditions of risk—a type of formal representation that reduces the representation of risk to a single number. I shall refer to these as risk-numericalizing theories of decision. I shall argue that risk-numericalizing theories (referring both to the representations and to the underlying axioms that render numericalization possible) are not satisfactory answers to the question: “How do I take the (best) means to my ends?” In other words, they are inadequate or incomplete as instrumental theories. They are inadequate because they are poor answers to the question of what it is for an option to be instrumental towards an end. To say it another way, they do not offer a sufficiently rich account of what it is for something to be a means (an instrument) toward an end.
Human freedom resides primarily in exercise of that capacity that humans employ more abundantly than any other species on earth: the capacity for judgement. And in particular: that special judgement in relation to Self that we call aspiration. Freedom is not the absence of a field of (other) powers; instead, freedom shows up only against the reticulations of power impinging from without. For freedom worthy of the name must be construed as an exercise of power within an already-present field of power. Thus, liberty and causal necessity are not obverses.
What does “Black lives matter” say that “All lives matter” does not? In particular, why do we appreciate a kind of conflict between them? This essay is about the way that social identities work in human life. Appreciating the way that identity works will shed light on the way that “All lives matter” undermines the force of “Black lives matter.”
We consider two versions of the view that the person of good sense has good sensibility and argue that at least one version of the view is correct. The version we defend is weaker than the version defended by contemporary Aristotelians; it can be consistently accepted even by those who find the contemporary Aristotelian version completely implausible. According to the version we defend, the person of good sense can be relied on to act soundly in part because, with the guidance of a fine-tuned and wide-ranging ability to directly sense what is called for, she is regularly drawn, even pre-reflectively, to actions that are supported by reason. As we explain, this position does not imply that one's reasons exist independently of one's ends and that one must therefore detect reasons with the help of some value-detecting perceptual capacity that reveals some ends as valuable and others as not worth pursuing.
In A Social Theory of Freedom, Mariam Thalos argues that the philosophical theory of human freedom should be a broadly social and political theory that employs tools of phenomenology, rather than a theory that locates itself in relation to canonical positions regarding the issue of determinism. Thalos rejects the premise that a theory of freedom is fundamentally a theory of the metaphysics of constraint and, instead, lays out a political conception of freedom that is closely aligned with questions of social identity, self-development in contexts of intimate relationships, and social solidarity. Thalos argues that whether a person is free (in any context) depends upon a certain relationship of fit between that agent’s conception of themselves (both present and future), on the one hand, and the facts of their circumstances, on the other. Since relationships of fit are broadly logical, freedom is a logic—it is the logic of fit between one’s aspirations and one’s circumstances, what Thalos calls the logic of agency. The logic of agency, once fleshed out, becomes a broadly social and political theory that encompasses one’s self-conceptions as well as how these self-conceptions are generated, together with how they fit with the circumstances of one’s life. The theory of freedom proposed in this volume is fundamentally a social one.
Introduction. Mariam Thalos & Henry Kyburg Jr - 2003 - In M. Thalos & H. Kyburg Jr (eds.), Probability is the Very Guide of Life: The Philosophical Uses of Probability.
In this introduction we shall array a family of fundamental questions pertaining to probability, especially as it has been judged to bear upon the guidance of life. Applications and uses of probability theory need either to address some or all of these questions, or to tell us why they don’t. The essays assembled in this volume bring integrative perspectives on this family of questions. We asked the authors to describe in their own voices the intellectual histories of their contributions, so as to shed further light upon the philosophical interest of their projects, and their particular integrative approaches, within the broader context we have sketched. The authors’ comments precede the essays.
The apparent ineffectuality of quantum physics to reconcile its evolution rule with measurement phenomena has polarized the community of scholars working on the subject into, roughly, two camps. On the one side there are those who perceive the problem to be that of finding an interpretation of the conceptual structures of quantum theory whereon the two elements can be reconciled without having to revise the canonical understanding of either. On the other side are those who see measurement phenomena as posing a challenge: the challenge of revising either the canonical equation or its canonical application in such a way that no conflict arises between it and laboratory data. This dissertation focuses on proposals of the latter sort. The centerpiece of the thesis is a reinterpretation of the Theory of Macrosystems of Daneri, Loinger and Prosperi. I show that this theory is most charitably interpreted as a proposal for reconciling measurement phenomena, not so much with Schrödinger dynamics, but rather with a constraint on the evolution map. It is, in other words, an account of how measurement phenomena are consistent with a certain set of dynamical behaviors. The focus on behaviors rather than equations and their solutions is characteristic of recent approaches to studying phenomena in classical contexts, both phenomena to do with the approach to equilibrium and phenomena far from equilibrium. The focus on behaviors is motivated by the recognition that equations of motion for microscopic subcomponents of many-body systems do not wear on their countenances anything useful about the macroscopic behaviors of the systems they collectively describe--and are, in addition, virtually unsolvable. Therefore it profits an investigator to do without microscopic equations of motion in analyzing phenomena.
The Theory of Macrosystems likewise "does without" equations of motion; I show that the account it gives is independent of closed-form rules of evolution. For this reason I call it an agnostic solution to the measurement problem--and show that this is an altogether different approach to the problem than any other so far conceived.
There is a certain tangle of philosophical questions around which much philosophy, especially in our time, has circled, to the point where now there is something that deserves to be called a holding pattern around these issues: What are causes? How do they compare with reasons? What is Reason, with a capital R? How does it participate in the production of intentions that lead to action, particularly in arenas rife with uncertainty? Where do formal systems of symbols come into all of this? And how - if at all - can formal methods be harnessed to serve science and public policy, through guiding belief formation and decision-making? Henry Kyburg, Jr., has himself circled around this tangle of questions, at least once or twice. And so in his honor, and in the no-nonsense spirit of empiricism that marks his work as the work of a scientist, I'd like to sketch a way to cut through some of the Gordian knots at the center of this tangle. The theme will be that natural science has much of value to offer that has been willfully neglected by philosophers. This by itself is nowise surprising, as philosophy, particularly in the most exclusive parts of the academy, has suffered from an excessive transcendentalism - a theme to which I will return in this piece periodically. Now Kyburg has not been guilty of contributing to the causes for the decline of philosophy. He has, instead, been courageously working out the implications of his convictions regarding the virtues of vigorous formal systems, contributing to the advancement of empiricist methodologies, and generously supporting the causes of realism. In emulation of that courage, I offer this essay in the service of bold and vigorous formal systems, realism and - most emphatically - empiricism.