I attempt to get as clear as possible on the chain of reasoning by which irreversible macrodynamics is derivable from time-reversible microphysics, and in particular to clarify just what kinds of assumptions about the initial state of the universe, and about the nature of the microdynamics, are needed in these derivations. I conclude that while a “Past Hypothesis” about the early Universe does seem necessary to carry out such derivations, that Hypothesis is not correctly understood as a constraint on the early Universe’s entropy.
The Past Hypothesis is the claim that the Boltzmann entropy of the universe was extremely low when the universe began. Can we make sense of this claim when *classical* gravitation is included in the system? I first show that the standard rationale for not worrying about gravity is too quick. If the paper does nothing else, my hope is that it gets the problems induced by gravity the attention they deserve in the foundations of physics. I then try to make plausible a very weak claim: that there is a well-defined Boltzmann entropy that *can* increase in *some* interesting self-gravitating systems. More work is needed before we can say whether this claim answers the threat to the standard explanation of entropy increase.
The past hypothesis is that the entropy of the universe was very low in the distant past. It is put forward to explain the entropic arrow of time, but it has been suggested (e.g. [Penrose, R. (1989a). The emperor’s new mind. London: Vintage Books; Penrose, R. (1989b). Annals of the New York Academy of Sciences, 571, 249–264; Price, H. (1995). In S. F. Savitt (Ed.), Time’s arrows today. Cambridge: Cambridge University Press; Price, H. (1996). Time’s arrow and Archimedes’ point. Oxford: Oxford University Press; Price, H. (2004). In C. Hitchcock (Ed.), Contemporary debates in philosophy of science. Oxford: Blackwell]) that it is itself in need of explanation. It has also been suggested that cosmic inflation could provide the explanation, but Price (2004) raises a serious objection to this suggestion, which has otherwise received very little attention in the philosophical literature. Price points out that the standard inflationary explanation involves a double standard: although the evolution of the universe described by the inflationary model seems natural from the standard temporal perspective, it looks highly unnatural from the reversed temporal perspective. The main purpose of this paper is to propose a novel form of the inflationary explanation that avoids this objection. It is argued that the inflationary model would not involve a double standard (but would still explain the past hypothesis) if we construct the model with a global “boundary” condition instead of a conventional boundary condition: if we assume that the universe is as generic as possible overall, rather than as generic as possible at some given point (e.g. the Big Bang) as is assumed in the standard inflationary model. This novel form of the inflationary explanation is then compared with Price’s (1996) preferred explanation, a version of the so-called “Weyl hypothesis”.
In his recent book, Time and Chance, David Albert claims that by positing that there is a uniform probability distribution defined, on the standard measure, over the space of microscopic states that are compatible both with the current macrocondition of the world and with what he calls the “Past Hypothesis”, we can explain the time asymmetry of all of the thermodynamic behavior in the world. The principal purpose of this paper is to dispute this claim. I argue that Albert's proposal fails in his stated goal—to show how to use the time‐reversible dynamics of Newtonian physics to “underwrite the actual content of our thermodynamic experience” (Albert 2000, 159). Albert's proposal can satisfactorily explain why the overall entropy of the universe as a whole is increasing, but it does not and cannot explain the increasing entropy of relatively small, relatively short‐lived systems in energetic isolation without making use of a principle that leads to reversibility objections.
Why is our knowledge of the past so much more ‘expansive’ (to pick a suitably vague term) than our knowledge of the future, and what is the best way to capture the difference(s) (i.e., in what sense is knowledge of the past more ‘expansive’)? One could reasonably approach these questions by giving necessary conditions for different kinds of knowledge, and showing how some were satisfied by certain propositions about the past, and not by corresponding propositions about the future. I take it that such is the approach of Chapter 6 of Time and Chance (T&C). Here’s another such proposal, similar to, but significantly different from, that of T&C; my purpose in this section is to highlight the differences, by showing how this account fails.
In recent work on the foundations of statistical mechanics and the arrow of time, Barry Loewer and David Albert have developed a view that defends both a best system account of laws and a physicalist fundamentalism. I argue that there is a tension between their account of laws, which emphasizes the pragmatic element in assessing the relative strength of different deductive systems, and their reductivism or fundamentalism. If we take the pragmatic dimension in their account seriously, then the laws of the special sciences should be part of our best explanatory system of the world as well.
I defend what may loosely be called an eliminativist account of causation by showing how several of the main features of causation, namely asymmetry, transitivity, and necessitation (or sometimes probability-raising), arise from the combination of fundamental dynamical laws and a special constraint on the macroscopic structure of matter in the past. At the microscopic level, the causal features of necessitation and transitivity are grounded, but not the asymmetry. At the coarse-grained level of the macroscopic physics, the causal asymmetry is grounded, but not the necessitation or transitivity. Thus, at no single level of description does the physics justify the conditions that are taken to be constitutive of causation. Nevertheless, if we mix our reasoning about the microscopic and macroscopic descriptions, the structure provided by the dynamics and special initial conditions can justify the folk concept of causation to a significant extent. I explain why our causal concept works so well even though at bottom it is composed of a patchwork of principles that don't mesh well.
Non-presentist A-theories of time (such as the growing block theory and the moving spotlight theory) seem unacceptable because they invite skepticism about whether one exists in the present. To avoid this absurd implication, Peter Forrest appeals to the "Past is Dead hypothesis," according to which only beings in the objective present are conscious. We know we're present because we know we're conscious, and only present beings can be conscious. I argue that the dead past hypothesis undercuts the main reason for preferring non-presentist A-theories to their presentist rivals, rivals which straightforwardly avoid skepticism about the present.
In the past, hypothesis testing in medicine has employed the paradigm of the repeatable experiment. In statistical hypothesis testing, an unbiased sample is drawn from a larger source population, and a calculated statistic is compared to a preassigned critical region, on the assumption that the comparison could be repeated an indefinite number of times. However, repeated experiments often cannot be performed on human beings, due to ethical or economic constraints. We describe a new paradigm for hypothesis testing which uses only rearrangements of data present within the observed data set. The token swap test, based on this new paradigm, is applied to three data sets from cardiovascular pathology, and computational experiments suggest that the token swap test satisfies the Neyman-Pearson condition.
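The abstract does not spell out the token swap procedure itself. As a rough illustration of the general rearrangement paradigm it describes, here is a minimal sketch of a standard two-sample permutation test, in which the null distribution is built entirely from reshufflings of the observed data rather than from hypothetical repeated sampling; all function and variable names are illustrative, not taken from the paper:

```python
import random

def permutation_test(group_a, group_b, n_rearrangements=10_000, seed=0):
    """Two-sample permutation test: the null distribution is built solely
    from rearrangements of the observed data, with no appeal to repeated
    sampling from a larger source population."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_rearrangements):
        rng.shuffle(pooled)  # one rearrangement of the observed data
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    # p-value: fraction of rearrangements at least as extreme as the
    # observed statistic (+1 correction so p is never exactly zero)
    return (extreme + 1) / (n_rearrangements + 1)

# Clearly separated groups should yield a small p-value
p = permutation_test([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
```

Because every comparison is made against rearrangements of the data actually observed, the procedure requires no assumption that the experiment could be repeated, which is the point of contact with the paradigm described above.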
In "How Do We Know It Is Now Now?" David Braddon-Mitchell (Analysis 2004) develops an objection to the thesis that the past is real but the future is not. He notes my response to this, namely that the past, although real, is lifeless and (a fortiori?) lacking in sentience. He argues, however, that this response, which I call 'the past is dead hypothesis', is not tenable if combined with 'special relativity'. My purpose in this reply is to argue that, on the contrary, 'special relativity' supports the thesis that the future is unreal.
Of the many tasks undertaken in science, one is striking both in its scope and the epistemic difficulties it faces: the reconstruction of the deep past. Such reconstruction provides the resources to successfully explain puzzling extant traces, from fossils to radiation signatures, often in the absence of extensive and repeatable observations—the hallmark of good epistemic support. Yet good explanations do not come for free. Evidence can fail, in practice or in principle, to support one hypothesis over another (underdetermination). And when hypotheses do confront conflicting evidence, identifying the piece of theory to abandon can be notoriously difficult (testing holism). Good science, in any discipline, must overcome these challenges.
This paper first advances and discusses the hypothesis that so-called “iconic” or (for the auditory sphere) “echoic” memory is actually a form of perception of the past. Such perception is made possible by parallel inputs with differential delays which feed independently into the sensorium. This hypothesis goes well together with a set of related psychological and phenomenological facts, for example: Sperling’s results about the visual sensory buffer, the facts that we seem to see movement and hear temporal Gestalts, and the fact that we sometimes seem to hear sounds only after they have stopped. In its simplest form, and formulated in the somewhat misleading information-processing idiom, my hypothesis says that each one of a number of parallel input lines with different delays feeds into a spatially separate sensory unit. The set of such units then holds information about the immediate past in what one might call a “chronotopic” sensory map. This contrasts with the idea (common in sensory buffer theory) that the received sensory information is kept (while possibly decaying) in the same unit for some time after it occurred. The hypothesis also contradicts the theory that all sensory information passes through the same unit but is then successively passed through a unidirectional chain of separate units, where the past experiences then become represented (the shift register hypothesis). The main advantage of my theory, besides the natural explanations it offers for the above-mentioned kinds of phenomena, is that it postulates a parallel – and therefore robust – rather than a serial mechanism for the registering of temporal information. It can of course easily be modified to fit more complex models of the sensory cerebral code(s) as well as of the chronotopic representation as such. In the second part of my poster, I advance a corresponding hypothesis for those motor commands which control brief movements.
On closer inspection, most so-called “ballistic” movements do not seem to be truly ballistic (in the sense in which the movement of a cannonball is so) since the brain must exert some kind of feedforward control over the later part of their trajectory. I suggest that this control is at least sometimes realized by means of differentially delayed output from a chronotopic representation of successive segments of the movement. Not only could this be a biologically natural way of ensuring efficient adaptability of the movement; the hypothesis also explains the not uncommon experience of “seeing the whole movement laid out in advance” when it is initiated.
What new implications does the dynamical hypothesis have for cognitive science? The short answer is: None. The _Behavioral and Brain Sciences_ target article, “The dynamical hypothesis in cognitive science” by Tim Van Gelder is basically an attack on traditional symbolic AI and differs very little from prior connectionist criticisms of it. For the past ten years, the connectionist community has been well aware of the necessity of using (and understanding) dynamically evolving, recurrent network models of cognition.
The Extended Mind Hypothesis (EMH) needs a defence of phenomenal externalism in order to be consistent with an indispensable condition for attributing extended beliefs, concerning the conscious past endorsement of information. However, it is difficult, if not impossible, to envisage such a defence. Proponents of the EMH are thus confronted with a difficult dilemma: they either accept absurd attributions of belief, and thus deflate EMH, or incorporate, for compatibility reasons, the conscious past endorsement condition for extended belief attribution, implying a seemingly unavailable defence of phenomenal externalism, and thus risk inconsistency within EMH. Either way, EMH is threatened.
We consider the relation between past and future events from the perspective of the constructive episodic simulation hypothesis, which holds that episodic simulation of future events requires a memory system that allows the flexible recombination of details from past events into novel scenarios. We discuss recent neuroimaging and behavioral evidence that support this hypothesis in relation to the theater production metaphor.
Are morphological patterns learned in the form of rules? Some models deny this, attributing all morphology to analogical mechanisms. The dual mechanism model (Pinker, S., & Prince, A. (1988). On language and connectionism: analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73-193) posits that speakers do internalize rules, but that these rules are few and cover only regular processes; the remaining patterns are attributed to analogy. This article advocates a third approach, which uses multiple stochastic rules and no analogy. We propose a model that employs inductive learning to discover multiple rules, and assigns them confidence scores based on their performance in the lexicon. Our model is supported over the two alternatives by new "wug test" data on English past tenses, which show that participant ratings of novel pasts depend on the phonological shape of the stem, both for irregulars and, surprisingly, also for regulars. The latter observation cannot be explained under the dual mechanism approach, which derives all regulars with a single rule. To evaluate the alternative hypothesis that all morphology is analogical, we implemented a purely analogical model, which evaluates novel pasts based solely on their similarity to existing verbs. Tested against experimental data, this analogical model also failed in key respects: it could not locate patterns that require abstract structural characterizations, and it favored implausible responses based on single, highly similar exemplars. We conclude that speakers extend morphological patterns based on abstract structural properties, of a kind appropriately described with rules.
According to the knowledge argument, physicalism fails because when physically omniscient Mary first sees red, her gain in phenomenal knowledge involves a gain in factual knowledge. Thus not all facts are physical facts. According to the ability hypothesis, the knowledge argument fails because Mary only acquires abilities to imagine, remember and recognise redness, and not new factual knowledge. I argue that reducing Mary’s new knowledge to abilities does not affect the issue of whether she also learns factually: I show that gaining specific new phenomenal knowledge is required for acquiring abilities of the relevant kind. Since phenomenal knowledge is basic to abilities, and not vice versa, it remains an open question whether someone who acquires such abilities also learns something factual. The answer depends on whether the new phenomenal knowledge involved is factual. But this is the same question we wanted to settle when first considering the knowledge argument. The ability hypothesis, therefore, has offered us no dialectical progress with the knowledge argument, and is best forgotten.
What follows for the ability hypothesis reply to the knowledge argument if knowledge-how is just a form of knowledge-that? The obvious answer is that the ability hypothesis is false. For the ability hypothesis says that, when Mary sees red for the first time, Frank Jackson’s super-scientist gains only knowledge-how and not knowledge-that. In this paper I argue that this obvious answer is wrong: a version of the ability hypothesis might be true even if knowledge-how is a form of knowledge-that. To establish this conclusion I utilize Jason Stanley and Timothy Williamson’s well-known account of knowledge-how as “simply a species of propositional knowledge” (Stanley & Williamson 2001: 1). I demonstrate that we can restate the core claims of the ability hypothesis – that Mary only gains new knowledge-how and not knowledge-that – within their account of knowledge-how as a species of knowledge-that. I examine the implications of this result for both critics and proponents of the ability hypothesis.
Borrowing conceptual tools from Bergson, this essay asks after the shift in the temporality of life from Merleau-Ponty’s Phénoménologie de la perception to his later works. Although the Phénoménologie conceives life in terms of the field of presence of bodily action, later texts point to a life of invisible and immemorial dimensionality. By reconsidering Bergson, but also thereby revising his reading of Husserl, Merleau-Ponty develops a non-serial theory of time in the later works, one that acknowledges the verticality and irreducibility of the past. Life in the flesh relies on unconsciousness or forgetting, on an invisibility that structures its passage.
David Lewis (1983, 1988) and Laurence Nemirow (1980, 1990) claim that knowing what an experience is like is knowing-how, not knowing-that. They identify this know-how with the abilities to remember, imagine, and recognize experiences, and Lewis labels their view ‘the Ability Hypothesis’. The Ability Hypothesis has intrinsic interest. But Lewis and Nemirow devised it specifically to block certain anti-physicalist arguments due to Thomas Nagel (1974, 1986) and Frank Jackson (1982, 1986). Does it?
A sentence in the Resultative perfect licenses two inferences: (a) the occurrence of an event, and (b) that the state caused by this event obtains at evaluation time. In this paper I show that this use of the perfect is subject to a large number of distributional restrictions that all serve to highlight the result inference at the expense of the event inference. Nevertheless, only the event inference determines the truth conditions of this use of the perfect, the result inference being a unique type of conventional implicature. I argue furthermore that, since the result state is singular, the event that causes it must also be singular, whereas the Experiential perfect is purely quantificational. But in out-of-the-blue contexts the past tense is also normally interpreted as singular. This leads to a certain amount of competition between the Resultative perfect and the past tense, and it is this competition, I suggest, that maintains the conventional (non-truth-conditional) result state inference.
The ‘Knobe effect’ is the name given to the empirical finding that judgments about whether an action is intentional or not seem to depend on the moral valence of this action. To account for this phenomenon, Scaife and Webber have recently advanced the ‘Consideration Hypothesis’, according to which people’s ascriptions of intentionality are driven by whether they think the agent took the outcome into consideration when making his decision. In this paper, I examine Scaife and Webber’s hypothesis and conclude that it is supported neither by the existing literature nor by their own experiments, whose results I did not replicate, and that the ‘Consideration Hypothesis’ is not the best available account of the ‘Knobe Effect’.
The Perceptual Hypothesis is that we sometimes see, and thereby have non-inferential knowledge of, others' mental features. The Perceptual Hypothesis opposes Inferentialism, which is the view that our knowledge of others' mental features is always inferential. The claim that some mental features are embodied is the claim that some mental features are realised by states or processes that extend beyond the brain. The view I discuss here is that the Perceptual Hypothesis is plausible if, but only if, the mental features it claims we see are suitably embodied. Call this Embodied Perception Theory. I argue that Embodied Perception Theory is false. It doesn't follow that the Perceptual Hypothesis is implausible. The considerations which serve to undermine Embodied Perception Theory serve equally to undermine the motivations for assuming that others' mental lives are always imperceptible.
According to the Ability Hypothesis, knowing what it is like to have experience E is just having the ability to imagine or recognize or remember having experience E. I examine various versions of the Ability Hypothesis and point out that they all face serious objections. Then I propose a new version that is not vulnerable to these objections: knowing what it is like to experience E is having the ability to discriminate imagining or having experience E from imagining or having any other experience. I argue that if we replace the ability to imagine or recognize with the ability to discriminate, the Ability Hypothesis can be salvaged.
Enactive approaches foreground the role of interpersonal interaction in explanations of social understanding. This motivates, in combination with a recent interest in neuroscientific studies involving actual interactions, the question of how interactive processes relate to neural mechanisms involved in social understanding. We introduce the Interactive Brain Hypothesis (IBH) in order to help map the spectrum of possible relations between social interaction and neural processes. The hypothesis states that interactive experience and skills play enabling roles in both the development and current function of social brain mechanisms, even in cases where social understanding happens in the absence of immediate interaction. We examine the plausibility of this hypothesis against developmental and neurobiological evidence and contrast it with the widespread assumption that mindreading is crucial to all social cognition. We describe the elements of social interaction that bear most directly on this hypothesis and discuss the empirical possibilities open to social neuroscience. We propose that the link between coordination dynamics and social understanding can be best grasped by studying transitions between states of coordination. These transitions form part of the self-organization of interaction processes that characterize the dynamics of social engagement. The patterns and synergies of this self-organization help explain how individuals understand each other. Various possibilities for role-taking emerge during interaction, determining a spectrum of participation. This view contrasts sharply with the observational stance that has guided research in social neuroscience until recently. We also introduce the concept of readiness to interact to describe the practices and dispositions that are summoned in situations of social significance (even if not interactive). This latter idea links interactive factors to more classical observational scenarios.
This paper introduces a new family of cases where agents are jointly morally responsible for outcomes over which they have no individual control, a family that resists standard ways of understanding outcome responsibility. First, the agents in these cases do not individually facilitate the outcomes and would not seem individually responsible for them if the other agents were replaced by non-agential causes. This undermines attempts to understand joint responsibility as overlapping individual responsibility; the responsibility in question is essentially joint. Second, the agents involved in these cases are not aware of each other's existence and do not form a social group. This undermines attempts to understand joint responsibility in terms of actual or possible joint action or joint intentions, or in terms of other social ties. Instead, it is argued that intuitions about joint responsibility are best understood given the Explanation Hypothesis, according to which a group of agents are seen as jointly responsible for outcomes that are suitably explained by their motivational structures: something bad happened because they didn’t care enough; something good happened because their dedication was extraordinary. One important consequence of the proposed account is that responsibility for outcomes of collective action is a deeply normative matter.
Liberal theories of justice have often been unable to include the recognition of minority rights or of multiculturalism because of their emphasis on individuals. In contrast, recent theories of cultural recognition and minority rights have underestimated the tensions between group and individual rights. It is precisely the incorporation of past wrongs and their impact on present politics that can advance the liberal theory of justice for cultural minorities and their members.
Knowing one’s past thoughts and attitudes is a vital sort of self-knowledge. In the absence of memorial impressions to serve as evidence, we face a pressing question of how such self-knowledge is possible. Recently, philosophers of mind have argued that self-knowledge of past attitudes supervenes on rationality. I examine two kinds of argument for this supervenience claim, one from cognitive dynamics and one from practical rationality, and reject both. I present an alternative account, on which knowledge of past attitudes is inferential knowledge, and depends upon contingent facts of one’s rationality and consistency. Failures of self-knowledge are better explained by the inferential account.
This paper is about The Truthmaker Problem for Presentism. I spell out a solution to the problem that involves appealing to indeterministic laws of nature and branching semantics for past- and future-tensed sentences. Then I discuss a potential glitch for this solution, and propose a way to get around that glitch. Finally, I consider some likely objections to the view offered here, as well as replies to those objections.
Many philosophers and psychologists now argue that emotions play a vital role in reasoning. This paper explores one particular way of elucidating how emotions help reason, which may be dubbed ‘the search hypothesis of emotion’. After outlining the search hypothesis of emotion and dispensing with a red herring that has marred previous statements of the hypothesis, I discuss two alternative readings of the search hypothesis. It is argued that the search hypothesis must be construed as an account of what emotions typically do, rather than as a definition of emotion. Even as an account of what emotions typically do, the search hypothesis can only be evaluated in the context of a specific theory of what emotions are. 1 Introduction 2 The search hypothesis of emotion 3 A red herring: the frame problem 4 The search problem 5 Two readings of the search hypothesis 6 Two final remarks 7 Conclusion.
This paper defends the claim that, in order to have a concept of time, subjects must have memories of particular events they once witnessed. Some patients with severe amnesia arguably still have a concept of time. Two possible explanations of their grasp of this concept are discussed. They take as their respective starting points abilities preserved in the patients in question: (1) the ability to retain factual information over time despite being unable to recall the past event or situation that information stems from, and (2) the ability to remember at least some past events or situations themselves (typically because retrograde amnesia is not complete). It is argued that a satisfactory explanation of what it is for subjects to have a concept of time must make reference to their having episodic memories such as those mentioned under (2). It is also shown how the question as to whether subjects have such memories, and thus whether they possess a concept of time, enters into our explanation of their actions.
Research on patients with damage to ventromedial frontal cortices suggests a key role for emotions in practical decision making. This field of investigation is often associated with Antonio Damasio’s Somatic Marker Hypothesis—a putative account of the mechanism through which autonomic tags guide decision making in typical individuals. Here we discuss two questionable assumptions—or ‘myths’—surrounding the direction and interpretation of this research. First, it is often assumed that there is a single somatic marker hypothesis. As others have noted, however, Damasio’s ‘hypothesis’ admits of multiple interpretations (Dunn et al.; Colombetti). Our analysis builds upon this point by characterizing decision making as a multi-stage process and identifying the various potential roles for somatic markers. The second myth is that the available evidence suggests a role for somatic markers in the core stages of decision making, that is, during the generation, deliberation, or evaluation of candidate options. On the contrary, we suggest that somatic markers most likely have a peripheral role, in the recognition of decision points, or in the motivation of action. This conclusion is based on an examination of the past twenty-five years of research conducted by Damasio and colleagues, focusing in particular on some early experiments that have been largely neglected by the critical literature. 1 Introduction 2 What is the Somatic Marker Model? 3 Multiple Somatic Marker Hypotheses 3.1 Are somatic markers necessary for practical decision making? 3.2 Speed, accuracy, or both? 3.3 At which of the five stages of decision making are somatic markers engaged? 4 Anecdotal Evidence Suggests a Peripheral Role for Somatic Markers 4.1 Chronic indecisiveness 4.2 Extreme impulsiveness 4.3 Enhanced decision making in the lab 4.4 Lack of motivation 5 Early Experiments Suggest that VMF Damage Leaves Core Processes Intact 5.1 The evocative images study 5.2 Five problem solving tasks 6 Recent Experiments Fail to Discriminate among Alternate Versions of SMH 7 Conclusion.
Although it could avoid some harmful effects of climate change, sulphate aerosol geoengineering (SAG), or injecting sulphate aerosols into the stratosphere in order to reflect incoming solar radiation, threatens substantial harm to humans and non-humans. I argue that SAG is prima facie ethically problematic from anthropocentric, animal liberationist, and biocentric perspectives. This might be taken to suggest that ethical evaluations of SAG can rely on Bryan Norton's convergence hypothesis, which predicts that anthropocentrists and non-anthropocentrists will agree to implement the same or similar environmental policies. However, there are potential scenarios in which anthropocentrists and non-anthropocentrists would seem to diverge on whether a particular SAG policy ought to be implemented. This suggests that the convergence hypothesis should not be relied on in ethical evaluation of SAG. Instead, ethicists should consider the merits and deficiencies of both non-anthropocentric perspectives and the ethical evaluations of SAG such perspectives afford.
The probability that a fair coin tossed yesterday landed heads is either 0 or 1, but the probability that it would land heads was 0.5. In order to account for the latter type of probabilities, past probabilities, a temporal restriction operator is introduced and axiomatically characterized. It is used to construct a representation of conditional past probabilities. The logic of past probabilities turns out to be strictly weaker than the logic of standard probabilities.
We offer a formal account of the English past tenses. We see the perfect as having reference time at speech time and the preterite as having reference time at event time. We formalize four constraints on reference time, which we bundle together under the term 'perspective'. Once these constraints are satisfied at the different reference times of the perfect and preterite, the contrasting functions of these tenses are explained. Thus we can account formally for the 'definiteness effect' and the 'lifetime effect' of the perfect, for the fact that the perfect seems to 'explain' something about the present, and that the perfect cannot presuppose a past time point. We explain why perfect and preterite can sometimes be interchangeable, and we offer a solution to the 'present perfect puzzle'. We explain the unacceptability of notorious examples of the perfect such as *Gutenberg has discovered the art of printing. We give greater definition to the familiar notions of 'current relevance' and 'extended now'.
I argue that David Lewis's attempt, in his 'Counterfactual Dependence and Time's Arrow', to explain the fixity of the past in terms of counterfactual independence is unsuccessful. I point out that there is an ambiguity in the claim that the past is counterfactually independent of the present (or, more generally, that the earlier is counterfactually independent of the later), corresponding to two distinct theses about the relation between time and counterfactuals, both officially endorsed by Lewis. I argue that Lewis's attempt is flawed for a variety of reasons, including the fact that his own theory about the evaluation of counterfactuals requires too many exceptions to the general rule that the past is counterfactually independent of the present. At the end of the paper, I consider a variant of Lewis's strategy that attempts to explain the fixity of the past in terms of causal, rather than counterfactual, independence. I conclude that, although this variant avoids some of the objections that afflict Lewis's account, it nevertheless seems to be incapable of giving a satisfactory explanation of the notion of the fixity of the past.
This paper argues that the sublime feeling can only announce itself as a paradoxical mixture of pain and pleasure in an experience of a lost or irrevocable past. Presenting the typical evanescence and inevitable deferral of the past in musical terms, this paper rewrites the sublime feeling as a musical feeling: a suspended feeling wavering in-between apparently opposite intensities of tension and respite. This suspended feeling is analyzed through a juxtaposition of the sublime with Sehnsucht, or the potentially endless longing for an irretrievable past, and trauma, or the potentially endless rehearsal of an unforgettable past.
When addressing the notion of proper time in the theory of relativity, it is usually taken for granted that the time read by an accelerated clock is given by the Minkowski proper time. However, there are authors, like Harvey Brown, who consider an extra assumption, the so-called clock hypothesis, necessary to arrive at this result. In opposition to Brown, Richard T. W. Arthur takes the clock hypothesis to be already implicit in the theory. In this paper I will present a view different from those of these authors by recovering Einstein's notion of a natural clock and showing its relevance to the debate.
Against Russell's skeptical conjecture, that the world and its entire population came into existence five minutes ago, it is argued that any one of the following is logically incompatible with the conjunction of the other two: ostensible memories of certain events, records of such events, and the non-occurrence of these same events. This conclusion is reached through a critical examination of (1) the arguments advanced by Norman Malcolm in trying to show that Russell's "hypothesis" does not express a logical possibility, and (2) the counterarguments by which James W. Cornman tries to show that it does.
The clock hypothesis of relativity theory equates the proper time experienced by a point particle along a timelike curve with the length of that curve as determined by the metric. Is it possible to prove that particular types of clocks satisfy the clock hypothesis, and thus genuinely measure proper time, at least approximately? Because most real clocks would be enormously complicated to study in this connection, focusing attention on an idealized light clock is attractive. The present paper extends and generalizes partial results along these lines with a theorem showing that, for any timelike curve in any spacetime, there is a light clock that measures the curve's length as accurately and regularly as one wishes.
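The identification the abstract appeals to can be written out explicitly. As a sketch in notation assumed here (not taken from the paper): for a timelike curve $\gamma$ with coordinates $x^\mu(\lambda)$ in a spacetime with metric $g_{\mu\nu}$ of signature $(-,+,+,+)$, the clock hypothesis says an ideal clock carried along $\gamma$ reads

```latex
\tau[\gamma] \;=\; \int_{\lambda_0}^{\lambda_1}
  \sqrt{-\,g_{\mu\nu}\,\frac{dx^\mu}{d\lambda}\,\frac{dx^\nu}{d\lambda}}\; d\lambda ,
```

that is, the metric length of $\gamma$, with no separate dependence on the curve's acceleration. The theorem reported in the abstract is that a suitably constructed light clock approximates this integral to any desired accuracy and regularity.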
We identify a particular type of causal reasoning ability that we believe is required for the possession of episodic memories, as it is needed to give substance to the distinction between the past and the present. We also argue that the same causal reasoning ability is required for grasping the point that another person's appeal to particular past events can have in conversation. We connect this to claims in developmental psychology that participation in joint reminiscing plays a key role in memory development.
This paper discusses the individuation of characters for use as units by geneticists at the beginning of the 20th century. The discussion involves the Presence and Absence Hypothesis as a case study. It is suggested that the gap between conceptual considerations and etiological factors in the individuation of characters is handled by way of mutual adjustment. Confrontation of a suggested morphological unit character with experimental results molded its final boundaries.
The uncanny valley hypothesis (Mori, 1970) predicts differential experience of negative and positive affect as a function of human likeness. Affective experience of realistic humanlike robots and computer-generated characters (avatars) dominates "uncanny" research, but findings are inconsistent. How objects are actually perceived along the hypothesis' dimension of human likeness (DOH), defined only in terms of human physical similarity, is unknown. To examine whether the DOH can be defined also in terms of effects of categorical perception (CP), stimuli from morph continua with controlled differences in physical human likeness between avatar and human faces as endpoints were presented. Two behavioural studies found a sharp category boundary along the DOH and enhanced visual discrimination (i.e. CP) of fine-grained differences between face pairs at the category boundary. Discrimination was better for face pairs that presented category change in the human-to-avatar than avatar-to-human direction along the DOH. To investigate brain representation of physical and category change within the uncanny valley hypothesis' framework, an event-related fMRI study used the same stimuli in a paired repetition-priming paradigm. Bilateral mid-fusiform areas and a different right mid-fusiform area were sensitive to physical change within the human and avatar categories, respectively, whereas entirely different regions were sensitive to the human-to-avatar (caudate head, putamen, thalamus, red nucleus) and avatar-to-human (hippocampus, amygdala, mid-insula) direction of category change. Our findings show that Mori's DOH definition does not reflect subjective perception of human likeness and suggest that future "uncanny" studies consider CP and the DOH category structure in guiding experience of nonhuman objects.
The Uncanny Valley Hypothesis (Mori, 1970) predicts that perceptual difficulty distinguishing between a humanlike object (e.g., lifelike prosthetic hand, mannequin) and its human counterpart evokes negative affect. Research has focussed on affect, with inconsistent results, but little is known about how objects along the hypothesis' dimension of human likeness (DHL) are actually perceived. This study used morph continua based on human and highly realistic computer-generated (avatar) faces to represent the DHL. Total number and dwell time of fixations to facial features were recorded while participants (N=60) judged avatar vs. human category membership of the faces in a forced choice categorisation task. Fixation and dwell data confirmed the face feature hierarchy (eyes, nose and mouth in this order of importance) across the DHL. There were no further findings for fixation. A change in the relative importance of these features was found for dwell time, with greater preferential processing of eyes and mouth of categorically ambiguous faces compared with unambiguous avatar faces. There were no significant differences between ambiguous and human faces. These findings applied for men and women, though women generally dwelled more on the eyes to the disadvantage of the nose. The mouth was unaffected by gender. In summary, the relative importance of facial features changed on the DHL's nonhuman side as a function of categorisation ambiguity. This change was indicated by dwell time only, suggesting greater depth of perceptual processing of the eyes and mouth of ambiguous faces compared with these features in unambiguous avatar faces.
According to Stalnaker's Hypothesis, the probability of an indicative conditional, $\Pr(\varphi \rightarrow \psi)$, equals the probability of the consequent conditional on its antecedent, $\Pr(\psi \mid \varphi)$. While the hypothesis is generally taken to have been conclusively refuted by Lewis' and others' triviality arguments, its descriptive adequacy has been confirmed in many experimental studies. In this paper, we consider some possible ways of resolving the apparent tension between the analytical and the empirical results relating to Stalnaker's Hypothesis, and we argue that none offer a satisfactory resolution.
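The tension can be seen in miniature with a toy model. The sketch below is my own illustration, not drawn from the paper: it reads the conditional as material implication (one candidate proposition, under which the two quantities demonstrably diverge) and checks by simulation that $\Pr(\varphi \rightarrow \psi)$ then comes apart from $\Pr(\psi \mid \varphi)$. The events `phi` and `psi` are hypothetical choices made only for the example.

```python
import random

def estimate(trials=100_000, seed=1):
    """Compare Pr(material conditional) with Pr(psi | phi) on a toy model.

    World: two fair coin flips; phi = 'first flip lands heads',
    psi = 'both flips land heads'.
    """
    rng = random.Random(seed)
    material = 0   # worlds where (not phi) or psi holds
    phi_count = 0  # worlds where phi holds
    both = 0       # worlds where phi and psi both hold
    for _ in range(trials):
        a, b = rng.random() < 0.5, rng.random() < 0.5
        phi, psi = a, a and b
        if (not phi) or psi:
            material += 1
        if phi:
            phi_count += 1
            if psi:
                both += 1
    return material / trials, both / phi_count

pr_material, pr_conditional = estimate()
# Analytically: Pr(not-phi or psi) = 0.75, while Pr(psi | phi) = 0.5
print(pr_material, pr_conditional)
```

The divergence (0.75 vs. 0.5 here) is exactly what the triviality arguments generalize: no single proposition can have a probability that tracks the conditional probability across all probability functions.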
According to the evolutionary hypothesis of Silverman and Eals (1992, Sex differences in spatial abilities: Evolutionary theory and data. In J. H. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture (pp. 533–549). Oxford: Oxford University Press), women surpass men in object location memory as a result of a sexual division in foraging activities among early humans. After surveying the main anthropological information on ancestral sex-related foraging, we review the evidence on how robust women's advantage in object location memory is. This leads us to suggest that the functional understanding of this type of memory would benefit from comparing men and women in carefully designed and ecologically meaningful cognitive contexts involving, for instance, incidental versus intentional settings that call for either the absolute or relative encoding of the locations of common versus uncommon objects.
Behaving organisms are continually choosing. Recently the theoretical and empirical study of decision making by behavioral ecologists and experimental psychologists has converged in the area of foraging, particularly food acquisition. This convergence has raised the interdisciplinary question of whether principles that have emerged from the study of decision making in the operant conditioning laboratory are consistent with decision making in naturally occurring foraging. One such principle, the delay-reduction hypothesis, developed in studies of choice in the operant conditioning laboratory, states that the effectiveness of a stimulus as a reinforcer may be predicted most accurately by calculating the decrease in time to food presentation correlated with the onset of the stimulus, relative to the length of time to food presentation measured from the onset of the preceding stimulus. Since foraging involves choice, the delay-reduction hypothesis may be extended to predict aspects of foraging. We discuss the strategy of assessing parameters of foraging with operant laboratory analogues to foraging. We then compare the predictions of the delay-reduction hypothesis with those of optimal foraging theory, developed by behavioral ecologists, showing that, with two exceptions, the two positions make comparable predictions. The delay-reduction hypothesis is also compared to several contemporary psychological accounts of choice.
Results from several of our experiments with pigeons, designed as operant conditioning simulations of foraging, have shown the following: The more time subjects spend searching for or traveling between potential food sources, the less selective they become, that is, the more likely they are to accept the less preferred outcome; increasing time spent procuring food increases selectivity; how often the preferred outcome is available has a greater effect on choice than how often the less preferred outcome is available; subjects maximize reinforcement whether it is the rate, amount, or probability of reinforcement that is varied; and there are no significant differences between subjects performing under different types of deprivation (open vs. closed economies). These results are all consistent with the delay-reduction hypothesis. Moreover, they suggest that the technology of the operant conditioning laboratory may have fruitful application in the study of foraging, and, in doing so, they underscore the importance of an interdisciplinary approach to behavior.
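The quantity the hypothesis trades on, as summarized in the abstract above, is the decrease in expected time to food signalled by a stimulus. A minimal sketch of that calculation (my own illustration; the function names, the relative-preference rule, and the numbers are assumptions, not taken from the papers):

```python
def delay_reduction(T, t):
    """Delay reduction signalled by a stimulus: the decrease (T - t) in
    expected time to food at stimulus onset, where T is the expected time
    to food from the onset of the preceding stimulus and t is the expected
    time to food from onset of this stimulus (both in seconds)."""
    return T - t

def predicted_choice(T, t_a, t_b):
    """Toy prediction (an illustrative assumption, not a quoted model):
    relative preference for option A equals A's share of the total
    delay reduction signalled by the two options."""
    dr_a = delay_reduction(T, t_a)
    dr_b = delay_reduction(T, t_b)
    return dr_a / (dr_a + dr_b)

# Example: food is on average 60 s away at trial onset; stimulus A
# signals food in 10 s, stimulus B in 30 s.
print(predicted_choice(60, 10, 30))  # 50 / (50 + 30) = 0.625
```

On this reading, lengthening the search or travel time T inflates both delay reductions and pushes the ratio toward 0.5, which is one way to gloss the finding that longer search times make subjects less selective.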
Natural experiments wherein preferred marriage partners are co-reared play a central role in testing the Westermarck hypothesis. This paper reviews two such hitherto largely neglected experiments. The case of the Karo Batak is outlined in hopes that other scholars will procure additional information; the case of the Oneida community is examined in detail. Genealogical records reveal that, despite practicing communal child-rearing, marriages did take place within Oneida. However, when records are compared with first-person accounts, it becomes clear that, owing to age- and gender-segregating practices, most endogamously marrying individuals probably did not share a history of extensive propinquity.