Philosophers of science have given considerable attention to the logic of completed scientific systems. In this 1958 book, Professor Hanson turns to an equally important but comparatively neglected subject, the philosophical aspects of research and discovery. He shows that there is a logical pattern in finding theories as much as in using established theories to make deductions and predictions, and he sets out the features of this pattern with the help of striking examples in the history of science.
Søren Kierkegaard's Fear and Trembling is one of the most widely read works of Continental philosophy and the philosophy of religion. While several commentaries and critical editions exist, Jeffrey Hanson offers a distinctive approach to this crucial text. Hanson gives equal weight and attention to all three of Kierkegaard's "problems," treating Fear and Trembling as part of Kierkegaard's entire corpus and putting all parts into relation with each other. Additionally, he offers a distinctive analysis of the Abraham story and other biblical texts, giving particular attention to questions of poetics, language, and philosophy, especially as each relates to the aesthetic, the ethical, and the religious. Presented in a thoughtful, well-informed, and fresh manner, Hanson's claims are original and edifying. This new reading of Kierkegaard will stimulate fruitful dialogue on well-traveled philosophical ground.
Does general validity or real world validity better represent the intuitive notion of logical truth for sentential modal languages with an actuality connective? In (Philosophical Studies 130:436–459, 2006) I argued in favor of general validity, and I criticized the arguments of Zalta (Journal of Philosophy 85:57–74, 1988) for real world validity. But in Nelson and Zalta (Philosophical Studies 157:153–162, 2012) Michael Nelson and Edward Zalta criticize my arguments and claim to have established the superiority of real world validity. Section 1 of the present paper introduces the problem and sets out the basic issues. In Sect. 2 I consider three of Nelson and Zalta's arguments and find all of them deficient. In Sect. 3 I note that Nelson and Zalta direct much of their criticism at a phrase ('true at a world from the point of view of some distinct world as actual') I used only inessentially in Hanson (Philosophical Studies 130:436–459, 2006), and that their account of the philosophical foundations of modal semantics leaves them ill equipped to account for the plausibility of modal logics weaker than S5. Along the way I make several general suggestions for ways in which philosophical discussions of logical matters (especially, but not limited to, discussions of truth and logical truth for languages containing modal and indexical terms) might be facilitated and made more productive.
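For readers new to the distinction, the two notions can be stated compactly (notation mine, not drawn from the papers themselves). A model supplies a set of worlds W with a distinguished actual world w_@, and the actuality connective A evaluates its argument at that distinguished world:

\[
\models_{\mathrm{g}} \varphi \;\iff\; \text{for every model } \mathcal{M} \text{ and every } w \in W:\ \mathcal{M}, w \models \varphi
\]
\[
\models_{\mathrm{rw}} \varphi \;\iff\; \text{for every model } \mathcal{M}:\ \mathcal{M}, w_@ \models \varphi
\]

The two come apart over sentences like $A\varphi \leftrightarrow \varphi$: evaluated at $w_@$ itself the biconditional always holds, so the sentence is real world valid, but at other worlds $\varphi$ can diverge from its truth value at $w_@$, so it is not generally valid.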
The field of neuroimaging has reached a watershed. Brain imaging research has been the source of many advances in cognitive neuroscience and cognitive science over the last decade, but recent critiques and emerging trends are raising foundational issues of methodology, measurement, and theory. Indeed, concerns over the interpretation of brain maps have created serious controversies in social neuroscience, and, more important, point to a larger set of issues that lie at the heart of the entire brain mapping enterprise. In this volume, leading scholars -- neuroimagers and philosophers of mind -- reexamine these central issues and explore current controversies that have arisen in cognitive science, cognitive neuroscience, computer science, and signal processing. The contributors address both statistical and dynamical analysis and modeling of neuroimaging data and its interpretation, discussing localization, modularity, and neuroimagers' tacit assumptions about how these two phenomena are related; controversies over correlations between fMRI data and social attributions; and the standard inferential design approach in neuroimaging. Finally, the contributors take a more philosophical perspective, considering the nature of measurement in brain imaging, and offer a framework for novel neuroimaging data structures. Contributors: William Bechtel, Bharat Biswal, Matthew Brett, Martin Bunzl, Max Coltheart, Karl J. Friston, Joy J. Geng, Clark Glymour, Kalanit Grill-Spector, Stephen José Hanson, Trevor Harley, Gilbert Harman, James V. Haxby, Rik N. Henson, Nancy Kanwisher, Colin Klein, Richard Loosemore, Sébastien Meriaux, Chris Mole, Jeanette A. Mumford, Russell A. Poldrack, Jean-Baptiste Poline, Richard C. Richardson, Alexis Roche, Adina L. Roskies, Pia Rotshtein, Rebecca Saxe, Philipp Sterzer, Bertrand Thirion, Edward Vul.
Originally published in 1963, The Concept of the Positron forms a detailed analysis of quantum theory. Whilst it is not as well known as Professor Hanson's previous book, Patterns of Discovery, the text has many interesting aspects. In many ways it goes further than Hanson's earlier work in approaching the problems of theory competition and the rationality of science, topics that have since become central to the philosophy of science. It is also notable for a rigorous and forthright defence of the Copenhagen Interpretation. Taken together, the ideas presented in this book constitute a first-rate achievement in the history and philosophy of science. This paperback reissue comes with a new preface from Matthew Lund, Assistant Professor, Faculty of Philosophy and Religious Studies at Rowan University.
Arthur Diamond comments that "it is not clear how a donor distributes money through Hanson's market". Let me try again to be clear. Imagine David Levy were to seek funding for the regression he suggests in his comments, on the relative impact of sports versus science spending on aggregate productivity. Consider what might happen under three different funding institutions.
Connectionist models provide a promising alternative to the traditional computational approach that has for several decades dominated cognitive science and artificial intelligence, although the nature of connectionist models and their relation to symbol processing remains controversial. Connectionist models can be characterized by three general computational features: distinct layers of interconnected units, recursive rules for updating the strengths of the connections during learning, and "simple" homogeneous computing elements. Using just these three features one can construct surprisingly elegant and powerful models of memory, perception, motor control, categorization, and reasoning. What makes the connectionist approach unique is not its variety of representational possibilities or its departure from explicit rule-based models, or even its preoccupation with the brain metaphor. Rather, it is that connectionist models can be used to explore systematically the complex interaction between learning and representation, as we try to demonstrate through the analysis of several large networks.
In the first section, I consider what several logicians say informally about the notion of logical consequence. There is significant variation among these accounts, they are sometimes poorly explained, and some of them are clearly at odds with the usual technical definition. In the second section, I first argue that a certain kind of informal account—one that includes elements of necessity, generality, and apriority—is approximately correct. Next I refine this account and consider several important questions about it, including the appropriate characterization of necessity, the criterion for selecting logical constants, and the exact role of apriority. I argue, among other things, that there is no need to recognize a special logical sense of necessity and that the selection of terms to serve as logical constants is ultimately a pragmatic matter. In the third section, I consider whether the informal account I have presented and defended is adequately represented by the usual technical definition. I show that it is, and provably so, for certain limited ways of selecting logical constants. In the general case, however, there seems to be no way to be sure that the technical and informal accounts coincide.
Reverse inference in cognitive neuropsychology has been characterized as inference to ‘psychological processes’ from ‘patterns of activation’ revealed by functional magnetic resonance imaging or other scanning techniques. Several arguments have been provided against the possibility. Focusing on Machery's presentation, we attempt to clarify the issues, rebut the impossibility arguments, and propose and illustrate a strategy for reverse inference. 1 The Problem of Reverse Inference in Cognitive Neuropsychology; 2 The Arguments; 2.1 The anti-Bayesian argument; 3 Patterns of Activation; 4 Reverse Inference Practiced; 5 Seek and Ye Shall Find, Maybe; 6 Conclusion.
It has become increasingly common for philosophers to make use of the concept of artistic value, and, further, to distinguish artistic value from aesthetic value. In a recent paper, ‘The Myth of (Non-Aesthetic) Artistic Value’, Dominic Lopes takes issue with this, presenting a kind of corrective to current philosophical practice regarding the use of the concept of artistic value. Here I am concerned to defend current practice against Lopes's attack. I argue that there is some unclarity as to what aspect of this practice Lopes is objecting to, and I distinguish three kinds of objection that he could be read as making. I argue that none of these is adequately supported by Lopes's arguments, and that the corresponding three aspects of current philosophical practice are on firmer footing than Lopes's paper suggests. A new, plausible characterisation of artistic value will emerge from this discussion.
Policy disputes arise at all scales of governance: in clubs, non-profits, firms, nations, and alliances of nations. Both the means and ends of policy are disputed. While many, perhaps most, policy disputes arise from conflicting ends, important disputes also arise from differing beliefs on how to achieve shared ends. In fact, according to many experts in economics and development, governments often choose policies that are "inefficient" in the sense that almost everyone could expect to gain from other feasible policies. Many other kinds of experts also see existing policies as often clearly inferior to known alternatives.
The growing prominence of computers in contemporary life, often seemingly with minds of their own, invites rethinking the question of moral responsibility. If the moral responsibility for an act lies with the subject that carried it out, it follows that different concepts of the subject generate different views of moral responsibility. Some recent theorists have argued that actions are produced by composite, fluid subjects understood as extended agencies (cyborgs, actor networks). This view of the subject contrasts with methodological individualism: the idea that actions are produced only by human individuals. This essay compares two views of responsibility: moral individualism (the ethical twin of methodological individualism), and joint responsibility (associated with extended agency theory). It develops a view of what joint responsibility might look like, and considers the advantages it might bring relative to moral individualism as well as the objections that are sure to be raised against it.
What if we someday learn how to model small brain units, and so can "upload" ourselves into new computer brains? What if this happens before we learn how to make human-level artificial intelligences? The result could be a sharp transition to an upload-dominated world, with many dramatic consequences. In particular, fast and cheap replication may once again make Darwinian evolution of human values a powerful force in human history. With evolved values, most uploads would value life even when life is hard or short, uploads would reproduce quickly, and wages would fall. But total wealth should rise, so we could all do better by accepting uploads, or at worst taxing them, rather than trying to delay or segregate them.
Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?
This article argues that teaching medical and nursing students health care ethics in an interdisciplinary setting is beneficial for them. Doing so produces an education that is theoretically more consistent with the goals of health care ethics, can help to reduce moral stress and burnout, and can improve patient care. Based on a literature review, theoretical arguments and individual observation, this article will show that the benefits of interdisciplinary education, specifically in ethics, outweigh the difficulties many schools may have in developing such courses.
Attempts to model interstellar colonization may seem hopelessly compromised by uncertainties regarding the technologies and preferences of advanced civilizations. If light speed limits travel speeds, however, then a selection effect may eventually determine frontier behavior. Making weak assumptions about colonization technology, we use this selection effect to predict colonists' behavior, including which oases they colonize, how long they stay there, how many seeds they then launch, how fast and far those seeds fly, and how behavior changes with increasing congestion. This colonization model explains several astrophysical puzzles, predicting lone oases like ours, amid large quiet regions with vast unused resources.
Although research into fair and alternative trade networks has increased significantly in recent years, very little synthesis of the literature has occurred thus far, especially for social considerations such as gender, health, labor, and equity. We draw on insights from critical theorists to reflect on the current state of fair and alternative trade, draw out contradictions from within the existing research, and suggest actions to help realize the emancipatory potential of the movement. Using a systematic scoping review methodology, this paper reviews 129 articles and reports that discuss the social dimensions of fair and alternative trade experienced by Southern agricultural producers and workers. The results highlight gender, health, and labor dimensions of fair and alternative trade systems and suggest that diverse groups of producers and workers may be experiencing related inequities. By bringing together issues that are often only tangentially discussed in individual studies, the review gives rise to a picture that suggests that research on these issues is both needed and emerging. We end with a summary of key findings and considerations for future research and action.
A simple exogenous growth model gives conservative estimates of the economic implications of machine intelligence. Machines complement human labor when they become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, expensive hardware and software do only the few jobs where computers have the strongest advantage over humans. Eventually, computers do most jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do. An intelligence population explosion makes per-intelligence consumption fall this fast, while economic growth rates rise by an order of magnitude or more. These results are robust to automating incrementally, and to distinguishing hardware, software, and human capital from other forms of capital.
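One toy way to see the wage reversal (an illustration of mine, not necessarily the paper's exact specification): let computers C substitute one-for-one, after a productivity factor λ, for labor L in a CES production function,

\[
Y = \Big( K^{\rho} + (L + \lambda C)^{\rho} \Big)^{1/\rho},
\qquad
w = \frac{\partial Y}{\partial L} = \frac{1}{\lambda}\,\frac{\partial Y}{\partial C} = \frac{r_C}{\lambda},
\]

where $r_C$ is the competitive rental price of a computer that does the work of $\lambda$ humans. Once computers are close substitutes, the wage is pinned to $r_C/\lambda$ and falls as fast as computer prices do; while computers instead complement labor by doing different jobs, cheaper and better computers raise labor's marginal product and wages rise.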
The pace of scientific progress may be hindered by the tendency of our academic institutions to reward being popular rather than being right. A market-based alternative, where scientists can more formally 'stake their reputation', is presented here. It offers clear incentives to be careful and honest while contributing to a visible, self-consistent consensus on controversial scientific questions. In addition, it allows patrons to choose questions to be researched without choosing people or methods. The bulk of this paper is spent examining potential problems with the proposed approach. After this examination, the idea still seems to be plausible and worthy of further study.
If you might be living in a simulation then all else equal you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.
Engineers’ love of technology often gets in the way of their being useful. Consider Post-it Notes or, better yet, plain paper notepads. These probably seemed like trivial ideas, but they turned out to be terribly useful. Why? Because the marvel that is the human brain has a horrible short-term memory, which means that dumb-as-dirt memory aids can make people substantially smarter.
In Everett's many worlds interpretation, quantum measurements are considered to be decoherence events. If so, then inexact decoherence may allow large worlds to mangle the memory of observers in small worlds, creating a cutoff in observable world size. Smaller worlds are mangled and so not observed. If this cutoff is much closer to the median measure size than to the median world size, the distribution of outcomes seen in unmangled worlds follows the Born rule. Thus deviations from exact decoherence can allow the Born rule to be derived via world counting, with a finite number of worlds and no new fundamental physics.
Human behavior regarding medicine seems strange; assumptions and models that seem workable in other areas seem less so in medicine. Perhaps we need to rethink the basics. Toward this end, I have collected many puzzling stylized facts about behavior regarding medicine, and have sought a small number of simple assumptions which might together account for as many puzzles as possible.
A world product time series covering two million years is well fit by either a sum of four exponentials, or a constant elasticity of substitution (CES) combination of three exponential growth modes: "hunting," "farming," and "industry." The CES parameters suggest that farming substituted for hunting, while industry complemented farming, making the industrial revolution a smoother transition. Each mode grew world product by a factor of a few hundred, and grew a hundred times faster than its predecessor. This weakly suggests that within the next century a new mode might appear with a doubling time measured in days, not years.
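A minimal sketch of the functional form involved (notation and parameter names mine): a CES combination of exponential growth modes looks like

\[
y(t) \;=\; \Big( \sum_{i} \big( a_i\, e^{g_i t} \big)^{\rho} \Big)^{1/\rho},
\qquad i \in \{\text{hunting},\ \text{farming},\ \text{industry}\},
\]

where each mode grows at its own rate $g_i$ and the substitution parameter $\rho$ controls how modes combine: for $\rho > 0$ the fastest-growing mode eventually dominates output (substitution, as farming did to hunting), while for $\rho < 0$ output is held back by the slower mode until it catches up (complementarity, which smooths the transition, as with industry and farming).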
Intuition is surely a theme of singular importance to phenomenology, and Henry writes sometimes as if intuition should receive extensive attention from phenomenologists. However, he devotes relatively little attention to the problem of intuition himself. Instead he offers a complex critique of intuition and the central place it enjoys in phenomenological speculation. This article reconstructs Henry's critique and raises some questions for his counterintuitive theory of intuition. While Henry cannot make a place for the traditional sort of intuition, given his commitment to the primacy of life as the natural and spontaneous habitation of consciousness, an abode entirely outside the world, there could nevertheless, with some modification to Henry's thinking, be a role for intuition to play in discerning the traces of life in the world.
Given common priors, no agent can publicly estimate a non-zero sign for the difference between his estimate and another agent's future estimate. Thus rational agents cannot publicly anticipate the direction in which other agents will disagree with them.
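A compact restatement, in notation of my own choosing: let $x$ be any random variable, $e_1 = \mathrm{E}[x \mid I_1]$ agent 1's current estimate, and $e_2' = \mathrm{E}[x \mid I_2']$ agent 2's estimate at some later time. The claim is that, given common priors, there is no common-knowledge (public) event on which

\[
\mathrm{E}\big[\, \mathrm{sign}(e_1 - e_2') \;\big|\; I_1 \,\big] \;\neq\; 0 .
\]

Informally: an agent may expect to disagree with another, but cannot publicly predict the direction of that disagreement.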
In standard belief models, priors are always common knowledge. This prevents such models from representing agents' probabilistic beliefs about the origins of their priors. By embedding standard models in a larger standard model, however, pre-priors can describe such beliefs. When an agent's prior and pre-prior are mutually consistent, he must believe that his prior would only have been different in situations where relevant event chances were different, but that variations in other agents' priors are otherwise completely unrelated to which events are how likely. Due to this, Bayesians who agree enough about the origins of their priors must have the same priors.
In practice, scoring rules elicit good probability estimates from individuals, while betting markets elicit good consensus estimates from groups. Market scoring rules combine these features, eliciting estimates from individuals or groups, with groups costing no more than individuals. Regarding a bet on one event given another event, only logarithmic versions preserve the probability of the given event. Logarithmic versions also preserve the conditional probabilities of other events, and so preserve conditional independence relations. Given logarithmic rules that elicit relative probabilities of base event pairs, it costs no more to elicit estimates on all combinations of these base events.
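The best-known logarithmic version is Hanson's logarithmic market scoring rule (LMSR). A minimal sketch of its mechanics follows; the liquidity parameter b and the function names are my own, and the subsidy-accounting details the paper develops are omitted:

```python
import math

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b)) over outstanding shares q."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices p_i = exp(q_i/b) / sum_j exp(q_j/b); they sum to 1,
    so they can be read as the market's consensus probability estimates."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def trade_cost(q, delta, b=100.0):
    """A trader moving the market from q to q + delta pays C(q + delta) - C(q)."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# Example: a two-outcome market starts at uniform prices; buying shares of
# outcome 0 raises its price, i.e., the market's probability estimate for it.
q = [0.0, 0.0]
print(lmsr_prices(q))                  # [0.5, 0.5]
print(trade_cost(q, [20.0, 0.0]))      # positive cost of the trade
print(lmsr_prices([20.0, 0.0]))        # p_0 is now above 0.5
```

Because the prices are a softmax of outstanding shares, they always sum to one and move smoothly with trades, which is what lets a single subsidized market maker elicit estimates from individuals and groups alike.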
Humans have slowly built more productive societies by slowly acquiring various kinds of capital, and by carefully matching them to each other. Because disruptions can disturb this careful matching, and discourage social coordination, large disruptions can cause a "social collapse," i.e., a reduction in productivity out of proportion to the disruption. For many types of disasters, severity seems to follow a power law distribution. For some types, such as wars and earthquakes, most of the expected harm is predicted to occur in extreme events, which kill most people on Earth. So if we are willing to worry about any war or earthquake, we should worry especially about extreme versions. If individuals varied little in their resistance to such disruptions, events a little stronger than extreme ones would eliminate humanity, and our only hope would be to prevent such events. If individuals vary a lot in their resistance, however, then it may pay to increase the variance in such resistance, such as by creating special sanctuaries from which the few remaining humans could rebuild society.
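To see why the extreme tail can dominate (a standard power-law calculation, not taken from the paper): if severity $S$ has a Pareto tail $P(S > s) = (s/s_0)^{-\alpha}$ for $s \ge s_0$, the expected harm from events above a threshold $T$ is

\[
\mathrm{E}\big[ S \cdot \mathbf{1}\{S > T\} \big]
\;=\; \frac{\alpha}{\alpha - 1}\, s_0^{\alpha}\, T^{1-\alpha}
\qquad (\alpha > 1),
\]

which declines only slowly in $T$ when $\alpha$ is near 1 and diverges outright for $\alpha \le 1$; in either regime the rare, largest events account for most of the total expected harm.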
Some of the features of animal and human categorical perception (CP) for color, pitch and speech are exhibited by neural net simulations of CP with one-dimensional inputs: When a backprop net is trained to discriminate and then categorize a set of stimuli, the second task is accomplished by "warping" the similarity space (compressing within-category distances and expanding between-category distances). This natural side-effect also occurs in humans and animals. Such CP categories, consisting of named, bounded regions of similarity space, may be the ground level out of which higher-order categories are constructed; nets are one possible candidate for the mechanism that learns the sensorimotor invariants that connect arbitrary names (elementary symbols?) to the nonarbitrary shapes of objects. This paper examines how and why such compression/expansion effects occur in neural nets.
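A minimal, self-contained sketch of the kind of simulation described (the layer sizes, learning rate, and stimulus continuum here are illustrative choices of mine, not the paper's): train a small backprop net to categorize one-dimensional inputs, then compare hidden-layer distances for within- versus between-category stimulus pairs before and after training.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional stimuli along a continuum, split into two categories at 0.5.
x = np.linspace(0.05, 0.95, 12).reshape(-1, 1)
y = (x > 0.5).astype(float)          # target category labels, shape (12, 1)
labels = y.ravel()

# A tiny one-hidden-layer sigmoid net, trained by plain backprop on squared error.
W1 = rng.normal(0.0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
hidden = lambda s: sig(s @ W1 + b1)

def mean_distances(h, labels):
    """Mean hidden-layer distance over within- and between-category stimulus pairs."""
    within, between = [], []
    for i in range(len(h)):
        for j in range(i + 1, len(h)):
            d = float(np.linalg.norm(h[i] - h[j]))
            (within if labels[i] == labels[j] else between).append(d)
    return np.mean(within), np.mean(between)

print("before training:", mean_distances(hidden(x), labels))

lr = 0.5
for _ in range(5000):
    h = hidden(x)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1.0 - out)     # output-layer delta
    d_h = (d_out @ W2.T) * h * (1.0 - h)      # hidden-layer delta
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * x.T @ d_h;   b1 -= lr * d_h.sum(0)

print("after training: ", mean_distances(hidden(x), labels))
```

On typical runs the within-category mean distance shrinks relative to the between-category mean after training: the compression/expansion "warping" of similarity space that the paper examines.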