We describe here an interdisciplinary lab science course for non-majors using the history of science as a curricular guide. Our experience with diverse instructors underscores the importance of teachers and classroom dynamics, beyond the curriculum. Moreover, the institutional political context is central: are courses for non-majors valued, and is support given to instructors to innovate? Two sample projects are profiled.
In spite of many claims by people who have had the kind of mystical experiences that I want to discuss, such experiences do not reveal any reality beyond the experience itself; nor does the experience itself constitute a cosmic principle such as the Godhead, Absolute, One or Chaos. These experiences are in the last analysis merely subjective experiences. I say ‘merely’ here only to deny that the experiences have any significance for the cosmologists; not to deny that the experience has significant value for the experiencer. It may be that the experiences are the ultimate goal attainable by human beings. Their value does not depend on their being the ultimate truth.
We construct a model M of ZF which lies between L and L[c] for a Cohen real c and does not have the form L(x) for any set x. This is loosely based on the unwritten work done in a Bristol workshop about Woodin’s HOD Conjecture in 2011. The construction given here allows for a finer analysis of the needed assumptions on the ground models, thus taking us one step closer to understanding models of ZF, and the HOD Conjecture and its relatives. This model also provides a positive answer to a question of Grigorieff about intermediate models of ZFC, and we use it to show the failure of Kinna–Wagner Principles in M.
Economic theory is built on assumptions about human behavior—assumptions embodied in rational-choice theory. Underlying these assumptions are implicit notions about how we think and learn. These implicit notions are fundamentally important to social explanation. The very plausibility of the explanations that we develop out of rational-choice theory rests crucially on the accuracy of these notions about cognition and rationality. But there is a basic problem: There is often very little relationship between the assumptions that rational-choice theorists make and the way that humans actually act and learn in everyday life. This has significant implications for economic theory and practice. It leads to bad theories and inadequate explanations; it produces bad predictions and, thus, supports ineffective social policies.
What I wish to do in this paper is to look at a part of John Stuart Mill's ‘one very simple principle’ for determining the limits of state intervention. This principle is, you will remember, that ‘the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant’.
Etiquette and other merely formal normative standards like legality, honor, and rules of games are taken less seriously than they should be. While these standards are not intrinsically reason-providing in the way morality is often taken to be, they also play an important role in our practical lives: we collectively treat them as important for assessing the behavior of ourselves and others and as licensing particular forms of sanction for violations. This chapter develops a novel account of the normativity of formal standards where the role they play in our practical lives explains a distinctive kind of reason to obey them. We have this kind of reason to be polite because etiquette is important to us. We also have this kind of reason to be moral because morality is important to us. This parallel suggests that the importance we assign to morality is insufficient to justify its being substantive.
A natural suggestion and increasingly popular account of how to revise our logical beliefs treats revision of logic analogously to the revision of scientific theories. I investigate this approach and argue that simple applications of abductive methodology to logic result in revision-cycles, developing a detailed case study of an actual dispute with this property. This is problematic if we take abductive methodology to provide justification for revising our logical framework. I then generalize the case study, pointing to similarities with more recent and popular heterodox logics such as naïve logics of truth. I use this discussion to motivate a constraint—logical partisanhood—on the uses of such methodology: roughly, both the proposed alternative and our actual background logic must be able to agree that moving to the alternative logic is no worse than staying put.
What makes a biological entity an individual? Jack Wilson shows that past philosophers have failed to explicate the conditions an entity must satisfy to be a living individual. He explores the reason for this failure and explains why we should limit ourselves to examples involving real organisms rather than thought experiments. This book explores and resolves paradoxes that arise when one applies past notions of individuality to biological examples beyond the conventional range and presents an analysis of identity and persistence. The book's main purpose is to bring together two lines of research, theoretical biology and metaphysics, which have dealt with the same subject in isolation from one another. Wilson defends an alternative theory of biological individuality that solves problems which cannot be addressed by either field alone. He presents a more fine-grained vocabulary of individuation based on diverse kinds of living things, allowing him to clarify previously muddled disputes about individuality in biology.
I argue that certain species of belief, such as mathematical, logical, and normative beliefs, are insulated from a form of Harman-style debunking argument whereas moral beliefs, the primary target of such arguments, are not. Harman-style arguments have been misunderstood as attempts to directly undermine our moral beliefs. They are rather best given as burden-shifting arguments, concluding that we need additional reasons to maintain our moral beliefs. If we understand them this way, then we can see why moral beliefs are vulnerable to such arguments while mathematical, logical, and normative beliefs are not—the very construction of Harman-style skeptical arguments requires the truth of significant fragments of our mathematical, logical, and normative beliefs, but requires no such thing of our moral beliefs. Given this property, Harman-style skeptical arguments against logical, mathematical, and normative beliefs are self-effacing; doubting these beliefs on the basis of such arguments results in the loss of our reasons for doubt. But we can cleanly doubt the truth of morality.
Most ‘theories of consciousness’ are based on vague speculations about the properties of conscious experience. We aim to provide a more solid basis for a science of consciousness. We argue that a theory of consciousness should provide an account of the very processes that allow us to acquire and use information about our own mental states—the processes underlying introspection. This can be achieved through the construction of information processing models that can account for ‘Type-C’ processes. Type-C processes can be specified experimentally by identifying paradigms in which awareness of the stimulus is necessary for an intentional action. The Shallice (1988b) framework is put forward as providing an initial account of Type-C processes, which can relate perceptual consciousness to consciously performed actions. Further, we suggest that this framework may be refined through the investigation of the functions of prefrontal cortex. The formulation of our approach requires us to consider fundamental conceptual and methodological issues associated with consciousness. The most significant of these issues concerns the scientific use of introspective evidence. We outline and justify a conservative methodological approach to the use of introspective evidence, with attention to the difficulties historically associated with its use in psychology.
I distinguish two ways of developing anti-exceptionalist approaches to logical revision. The first emphasizes comparing the theoretical virtuousness of developed bodies of logical theories, such as classical and intuitionistic logic. I'll call this whole theory comparison. The second attempts local repairs to problematic bits of our logical theories, such as dropping excluded middle to deal with intuitions about vagueness. I'll call this the piecemeal approach. I then briefly discuss a problem I've developed elsewhere for comparisons of logical theories. Essentially, the problem is that a pair of logics may each evaluate the alternative as superior to themselves, resulting in oscillation between logical options. The piecemeal approach offers a way out of this problem and thereby might seem preferable to whole theory comparisons. I go on to show that reflective equilibrium, the best known piecemeal method, has deep problems of its own when applied to logic.
Sometimes a fact can play a role in a grounding explanation, but the particular content of that fact makes no difference to the explanation—any fact would do in its place. I call these facts vacuous grounds. I show that applying the distinction between vacuous and non-vacuous grounds allows us to give a principled solution to Kit Fine and Stephen Kramer’s paradox of ground. This paradox shows that on minimal assumptions about grounding and minimal assumptions about logic, we can show that grounding is reflexive, contra the intuitive character of ground. I argue that we should never have accepted that grounding is irreflexive in the first place; the intuitions that support irreflexivity plausibly only require that grounding be non-vacuously irreflexive. Fine and Kramer’s paradox relies, essentially, on a case of vacuous grounding and is thus no problem for this account.
Expressivists explain the expression relation which obtains between sincere moral assertion and the conative or affective attitude thereby expressed by appeal to the relation which obtains between sincere assertion and belief. In fact, they often explicitly take the relation between moral assertion and their favored conative or affective attitude to be exactly the same as the relation between assertion and the belief thereby expressed. If this is correct, then we can use the identity of the expression relation in the two cases to test the expressivist account as a descriptive or hermeneutic account of moral discourse. I formulate one such test, drawing on a standard explanation of Moore's paradox. I show that if expressivism is correct as a descriptive account of moral discourse, then we should expect versions of Moore's paradox where we explicitly deny that we possess certain affective or conative attitudes. I then argue that the constructions that mirror Moore's paradox are not incoherent. It follows that expressivism is either incorrect as a hermeneutic account of moral discourse or that the expression relation which holds between sincere moral assertion and affective or conative attitudes is not identical to the relation which holds between sincere non-moral assertion and belief. A number of objections are canvassed and rejected.
Why do promises give rise to reasons? I consider a quadruple of possibilities which I think will not work, then sketch the explanation of the normativity of promising I find more plausible—that it is constitutive of the practice of promising that promise-breaking implies liability for blame and that we take liability for blame to be a bad thing. This effects a reduction of the normativity of promising to conventionalism about liability together with instrumental normativity and desire-based reasons. This is important for a number of reasons, but the most important reason is that this style of account can be extended to account for nearly all normativity—one notable exception being instrumental normativity itself. Success in the case of promises suggests a general reduction of normativity to conventions and instrumental normativity. But success in the case of promises is already quite interesting and does not depend essentially on the general claim about normativity.
I investigate syntactic notions of theoretical equivalence between logical theories and a recent objection thereto. I show that this recent criticism of syntactic accounts, as extensionally inadequate, is unwarranted by developing an account which is plausibly extensionally adequate and more philosophically motivated. This is important for recent anti-exceptionalist treatments of logic since syntactic accounts require less theoretical baggage than semantic accounts.
I defend normative subjectivism against the charge that believing in it undermines the functional role of normative judgment. In particular, I defend it against the claim that believing that our reasons change from context to context is problematic for our use of normative judgments. To do so, I distinguish two senses of normative universality and normative reasons: evaluative universality and reasons, and ontic universality and reasons. The former captures how even subjectivists can evaluate the actions of those subscribing to other conventions; the latter explicates how their reasons differ from ours. I then show that four aspects of the functional role of normativity—evaluation of our own and others' actions and reasons, normative communication, hypothetical planning, and evaluating counternormative conditionals—at most require that our normative systems be evaluatively universal. Yet reasonable subjectivist positions need not deny evaluative universality.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when---and only when---it increases the reliability of perception.
Raftopoulos’s most recent book argues, among other things, for the cognitive impenetrability of early vision. Before we can assess any such claims, we need to know what’s meant by “early vision” and by “cognitive penetration”. In this contribution to this book symposium, I explore several different things that one might mean – indeed, that Raftopoulos might mean – by these terms. I argue that whatever criterion we choose for delineating early vision, we need a single criterion, not a mishmash of distinct criteria. And I argue against defining cognitive penetration in partly epistemological terms, although it is fine to offer epistemological considerations in defending some definitions as capturing something of independent interest. Finally, I raise some questions about how we are to understand the “directness” of certain putative cognitive influences on perception and about whether there’s a decent rationale for restricting directness in the way that Raftopoulos apparently does.
Presupposing no familiarity with the technical concepts of either philosophy or computing, this clear introduction reviews the progress made in AI since the inception of the field in 1956. Copeland goes on to analyze what those working in AI must achieve before they can claim to have built a thinking machine and appraises their prospects of succeeding. There are clear introductions to connectionism and to the language of thought hypothesis which weave together material from philosophy, artificial intelligence and neuroscience. John Searle's attacks on AI and cognitive science are countered and close attention is given to foundational issues, including the nature of computation, Turing Machines, the Church-Turing Thesis and the difference between classical symbol processing and parallel distributed processing. The book also explores the possibility of machines having free will and consciousness and concludes with a discussion of in what sense the human brain may be a computer.
I argue that we can and should extend Tarski's model-theoretic criterion of logicality to cover indefinite expressions like Hilbert's ɛ operator, Russell's indefinite description operator η, and abstraction operators like 'the number of'. I draw on this extension to discuss the logical status of both abstraction operators and abstraction principles.
This is an opinionated overview of the Frege-Geach problem, in both its historical and contemporary guises. Covers Higher-order Attitude approaches, Tree-tying, Gibbard-style solutions, and Schroeder's recent A-type expressivist solution.
Among medieval Aristotelians, William of Ockham defends a minimalist account of artifacts, assigning to statues and houses and beds a unity that is merely spatial or locational rather than metaphysical. Thus, in contrast to his predecessors, Thomas Aquinas and Duns Scotus, he denies that artifacts become such by means of an advening ‘artificial form’ or ‘form of the whole’ or any change that might tempt us to say that we are dealing with a new thing (res). Rather, he understands artifacts as per accidens composites of parts that differ, but not so much that only divine power could unite them, as in the matter and form of a proper substance. For Ockham, artifacts are essentially rearrangements, via human agency, of already existing things, like the clay shaped by a sculptor into a statue or the stick and bristles and string one might fashion into a broom. Ockham does not think that a new thing is thereby created, although his emphasis on the contribution of human artisans seems to leave questions about the ontological status of their agency open. In any case, there are no such things as natural statues, any more than substances created by human artifice.
Ascriptions of objectivity carry significant weight. But they can also cause confusion because wildly different ideas of what it means to be objective are common. Faced with this, some philosophers have argued that objectivity should be eliminated. I will argue, against one such position, that objectivity can be useful even though it is plural. I will then propose a contextualist approach for dealing with objectivity as a way of rescuing what is useful about objectivity while acknowledging its plurality.
Philosophical arguments usually are and nearly always should be abductive. Across many areas, philosophers are starting to recognize that often the best we can do in theorizing some phenomenon is to put forward our best overall account of it, warts and all. This is especially true in esoteric areas like logic, aesthetics, mathematics, and morality, where the data to be explained are often based in our stubborn intuitions.

While this methodological shift is welcome, it's not without problems. Abductive arguments involve significant theoretical resources which themselves can be part of what's being disputed. This means that we will sometimes find otherwise good arguments which suggest their own grounds are problematic. In particular, sometimes revising our beliefs on the basis of such an argument can undermine the very justification we used in that argument.

This feature, which I'll call self-effacingness, occurs most dramatically in arguments against our standing views on the esoteric subject matters mentioned above: logic, mathematics, aesthetics, and morality. This is because these subject matters all play a role in how we reason abductively. This isn't an idle fact; we can resist some challenges to our standing beliefs about these subject matters exactly because the challenges are self-effacing. The self-effacing character of certain arguments is thus both a benefit and a limitation of the abductive turn and deserves serious attention. I aim to give it the attention it deserves.
The main claim of this paper is that Andy Clark's most influential argument for ‘the extended mind thesis’ (EM henceforth) fails. Clark's argument for EM assumes that a certain form of common-sense functionalism is true. I argue, contra Clark, that the assumed brand of common-sense functionalism does not imply EM. Clark's argument also relies on an unspoken, undefended and optional assumption about the nature of mental kinds—an assumption denied by the very common-sense functionalists on whom Clark's argument draws. I also critique Mark Sprevak's reductio of Clark's argument. Sprevak contends that Clark's argument does not merely entail EM; it entails an extended mind thesis so strong as to be absurd. He goes on to claim that Clark's argument should properly be viewed as a reductio of the very common-sense functionalism on which it depends. Sprevak's argument shares the flaw that afflicts Clark's argument, or so I claim.
The New Evil Demon Problem is supposed to show that straightforward versions of reliabilism are false: reliability is not necessary for justification after all. I argue that it does no such thing. The reliabilist can count a number of beliefs as justified even in demon worlds, others as unjustified but having positive epistemic status nonetheless. The remaining beliefs---primarily perceptual beliefs---are not, on further reflection, intuitively justified after all. The reliabilist is right to count these beliefs as unjustified in demon worlds, and it is a challenge for the internalist to be able to do so as well.
Open-mindedness is an under-explored topic in virtue epistemology, despite its assumed importance for the field. Questions about it abound and need to be answered. For example, what sort of intellectual activities are central to it? Can one be open-minded about one's firmly held beliefs? Why should we strive to be open-minded? This paper aims to shed light on these and other pertinent issues. In particular, it proposes a view that construes open-mindedness as engagement, that is, a willingness to entertain novel ideas in one's cognitive space and to accord them serious consideration.
Strong reciprocity (SR) has recently been subject to heated debate. In this debate, the “West camp” (2011), which is critical of the case for SR, and the “Laland camp” (2011; Biol Philos 28:719–745, 2013), which is sympathetic to the case for SR, seem to take diametrically opposed positions. The West camp criticizes advocates of SR for conflating proximate and ultimate causation. SR is said to be a proximate mechanism that is put forward by its advocates as an ultimate explanation of human cooperation. The West camp thus accuses advocates of SR of not heeding Mayr’s original distinction between ultimate and proximate causation. The Laland camp praises advocates of SR for revising Mayr’s distinction. Advocates of SR are said to replace Mayr’s uni-directional view on the relation between ultimate and proximate causes by the bi-directional one of reciprocal causation. The paper argues that both the West camp and the Laland camp misrepresent what advocates of SR are up to. The West camp is right that SR is a proximate cause of human cooperation. But rather than putting forward SR as an ultimate explanation, as the West camp argues, advocates of SR believe that SR itself is in need of ultimate explanation. Advocates of SR tend to take gene-culture co-evolutionary theory as the correct meta-theoretical framework for advancing ultimate explanations of SR. Appearances notwithstanding, gene-culture coevolutionary theory does not imply Laland et al.’s notion of reciprocal causation. “Reciprocal causation” suggests that proximate and ultimate causes interact simultaneously, while advocates of SR assume that they interact sequentially. I end by arguing that the best way to understand the debate is by disambiguating Mayr’s ultimate-proximate distinction.
I propose to reserve “ultimate” and “proximate” for different sorts of explanations, and to use other terms for distinguishing different kinds of causes and different parts of the total causal chain producing behavior.
Professor Jack Goody builds on his own previous work to extend further his highly influential critique of what he sees as the pervasive eurocentric or occidentalist biases of so much western historical writing. Goody also examines the consequent 'theft' by the West of the achievements of other cultures in the invention of (notably) democracy, capitalism, individualism, and love. The Theft of History discusses a number of theorists in detail, including Marx, Weber and Norbert Elias, and engages, with critical admiration, with western historians like Fernand Braudel, Moses Finley and Perry Anderson. Major questions of method are raised, and Goody proposes a new comparative methodology for cross-cultural analysis, one that gives a much more sophisticated basis for assessing divergent historical outcomes, and replaces outmoded simple differences between East and West. The Theft of History will be read by an unusually wide audience of historians, anthropologists and social theorists.
Biology lacks a central organism concept that unambiguously marks the distinction between organism and non-organism because the most important questions about organisms do not depend on this concept. I argue that the two main ways to discover useful biological generalizations about multicellular organization--the study of homology within multicellular lineages and of convergent evolution across lineages in which multicellularity has been independently established--do not require what would have to be a stipulative sharpening of an organism concept.
I argue that in order to apply the most common type of criteria for logicality, invariance criteria, to natural language, we need to consider both invariance of content—modeled by functions from contexts into extensions—and invariance of character—modeled, à la Kaplan, by functions from contexts of use into contents. Logical expressions should be invariant in both senses. If we do not require this, then old objections due to Timothy McCarthy and William Hanson, suitably modified, demonstrate that content-invariant expressions can display intuitive marks of non-logicality. If we do require this, we neatly avoid these objections while also managing to demonstrate desirable connections of logicality to necessity. The resulting view is more adequate as a demarcation of the logical expressions of natural language.
I respond to an interesting objection to my 2014 argument against hermeneutic expressivism. I argue that even though Toppinen has identified an intriguing route for the expressivist to tread, the plausible developments of it would not fall to my argument anyway, as they do not make direct use of the parity thesis, which claims that expression works the same way in the case of conative and cognitive attitudes. I close by sketching a few other problems plaguing such views.
In a series of joint papers, Teppo Felin and Nicolai J. Foss recently launched a microfoundations project in the field of strategic management. Felin and Foss observe that extant explanations in strategic management are predominantly collectivist or macro. Routines and organizational capabilities, which are supposed to be properties of firms, loom large in the field of strategic management. Routines figure as explanantia in explanations of firm behavior and firm performance, for example. Felin and Foss plead for a replacement of such macro-explanations by micro-explanations. Such a replacement is needed, Felin and Foss argue, because macro-explanations are necessarily incomplete: they miss out on crucial links in the causal chain that connect macro phenomena with each other. I argue that this argument is flawed. It is based on a doubtful if not outright incorrect understanding and use of Coleman’s diagram. In a sense to be explained below, only if Coleman’s diagram is squared can it accurately account for the relations between individual action and interaction, routines and firm behavior and firm performance. Once Coleman’s diagram is squared, one can see why and how macro-explanations need not miss out on any link in the causal chains that connect macro phenomena. Micro-analyses are still needed, not to highlight and specify causal links that macro-explanations miss out on, but to check whether the many properties that are ascribed to routines in macro-explanations of firm behavior are warranted.
I show that the model-theoretic meaning that can be read off the natural deduction rules for disjunction fails to have certain desirable properties. I use this result to argue against a modest form of inferentialism which uses natural deduction rules to fix model-theoretic truth-conditions for logical connectives.
Cognitive science has wholeheartedly embraced functional brain imaging, but introspective data are still eschewed to the extent that it runs against standard practice to engage in the systematic collection of introspective reports. However, in the case of executive processes associated with prefrontal cortex, imaging has made limited progress, whereas introspective methods have considerable unfulfilled potential. We argue for a re-evaluation of the standard ‘cognitive mapping’ paradigm, emphasizing the use of retrospective reports alongside behavioural and brain imaging techniques. Using all three sources of evidence can compensate for their individual limitations, and so triangulate on higher cognitive processes.
Important lessons must be learned from the Bristol inquiry. I was disturbed when I first read the following in an October 1998 issue of the Medical Journal of Australia. "In June 1998, the Professional Conduct Committee of the General Medical Council of the United Kingdom concluded the longest running case it has considered [this] century. Three medical practitioners were accused of serious professional misconduct relating to 29 deaths in 53 paediatric cardiac operations undertaken at the Bristol Royal Infirmary between 1988 and 1995. All three denied the charges but, after 65 days of evidence over eight months, all three were found guilty." "The doctors concerned are Mr James Wisheart, a paediatric and adult cardiac surgeon and the former Medical Director of the United Bristol Healthcare Trust; Mr Janardan Dhasmana, paediatric and adult cardiac surgeon; and Dr John Roylance, a former radiologist, and Chief Executive of the Trust." "The central allegations were that the Chief Executive and the Medical Director allowed to be carried out, and the two paediatric cardiac surgeons carried out, operations on children knowing that the mortality rates for these operations, in the hands of these surgeons, were high. Furthermore, the surgeons were accused of not communicating to the parents the correct risk of death for these operations in their hands." Mr Wisheart and Dr Roylance were subsequently struck off the medical register. Mr Dhasmana was disqualified from practising paediatric cardiac surgery for three years. The doctors required police protection as they left the General Medical Council hearing as furious parents shouted “murderer” and “bastard”. Why did this occur? Dr Stephen Bolsin has presented a …
One of the more important and under-thematized philosophical disputes in contemporary European philosophy pertains to the significance that is given to the inter-related phenomena of habituality, skilful coping, and learning. This paper examines this dispute by focusing on the work of the Merleau-Ponty and Heidegger-inspired phenomenologist Hubert Dreyfus, and contrasting his analyses with those of Gilles Deleuze, particularly in Difference and Repetition. Both Deleuze and Dreyfus pay a lot of attention to learning and coping, while arriving at distinct conclusions about these phenomena with a quite different ethico-political force. By getting to the bottom of the former, my hope is to problematize aspects of the latter in both philosophers' work. In Deleuze's case, it will be argued that he adopts a problematic position on learning that is aptly termed 'empirico-romanticism'. While I will agree with the general thrust of Dreyfus' foregrounding of habit and skilful coping, even in the political realm, it will also be argued that there are some risks associated with his view, notably of devolving into a conservative communitarianism.
Bioregionalists have championed the utility of the concept of the watershed as an organizing framework for thought and action directed to understanding and implementing appropriate and respectful human interaction with particular pieces of land. In a creative analogue to the watershed, permaculturist Arthur Getz has recently introduced the term “foodshed” to facilitate critical thought about where our food is coming from and how it is getting to us. We find the “foodshed” to be a particularly rich and evocative metaphor; but it is much more than metaphor. Like its analogue the watershed, the foodshed can serve us as a conceptual and methodological unit of analysis that provides a frame for action as well as thought. Food comes to most of us now through a global food system that is destructive of both natural and social communities. In this article we explore a variety of routes for the conceptual and practical elaboration of the foodshed. While corporations that are the principal beneficiaries of a global food system now dominate the production, processing, distribution, and consumption of food, alternatives are emerging that together could form the basis for foodshed development. Just as many farmers are recognizing the social and environmental advantages of sustainable agriculture, so are many consumers coming to appreciate the benefits of fresh and sustainably produced food. Such producers and consumers are being linked through such innovative arrangements as community supported agriculture and farmers' markets. Alternative producers, alternative consumers, and alternative small entrepreneurs are rediscovering community and finding common ground in municipal and community food councils. Recognition of one's residence within a foodshed can confer a sense of connection and responsibility to a particular locality.
The foodshed can provide a place for us to ground ourselves in the biological and social realities of living on the land and from the land, in a place that we can call home, a place to which we are or can become native.
In _Phenomenology, Naturalism and Empirical Science_, Jack Reynolds takes the controversial position that phenomenology and naturalism are compatible, and develops a hybrid account of phenomenology and empirical science. Though phenomenology and naturalism are typically understood as philosophically opposed to one another, Reynolds argues that this resistance is based on an understanding of transcendental phenomenology that is ultimately untenable and in need of updating. Phenomenology, as Reynolds reorients it, is compatible with liberal naturalism, as well as with weak forms of methodological naturalism. Chapters explore areas where scientific and phenomenological work overlap and sometimes conflict, contesting standard ways of understanding the relationship between phenomenological philosophy and empirical science. The book outlines the significance of the first-person perspective characteristic of phenomenology—both epistemically and ontologically—while according due respect to the relevant empirical sciences. This book makes a significant contribution to one of the central issues in phenomenology and argues for phenomenology’s ongoing importance for the future of philosophy.