According to the Generality Constraint, mental states with conceptual content must be capable of recombining in certain systematic ways. Drawing on empirical evidence from cognitive science, I argue that so-called analog magnitude states violate this recombinability condition and thus have nonconceptual content. I further argue that this result has two significant consequences: it demonstrates that nonconceptual content seeps beyond perception and infiltrates cognition; and it shows that whether mental states have nonconceptual content is largely an empirical matter determined by the structure of the neural representations underlying them.
In his book Mind and World, John McDowell grapples with the problem that the world must and yet seemingly cannot constrain our empirical thought. I first argue that McDowell’s proposed solution to the problem throws him onto the horns of his own, intractable dilemma, and thus fails to solve the problem of rational constraint by the world. Next, I will argue that Wilfrid Sellars, in a series of articles written in the 1950s and 60s, provides the tools to solve the dilemma McDowell sets before us. We will see how, borrowing from Sellars and certain neo-Sellarsians, we can solve the problem of rational constraint by perception without resorting to a McDowellian quasi-enchantment of the world.
Andrea Westlund's account of love involves lovers becoming a Plural Subject, mirroring Margaret Gilbert's Plural Subject Theory. However, while for Gilbert the creation of a plural will involves individuals jointly committing to pool their wills and the plural will directly normatively constraining those individuals, Westlund, in contrast, sees the creation of a plural will as a continual process, thus rejecting the possibility of such direct normative constraint. This rejection appears to be required to explain the flexibility that allows for a central place for reciprocity in loving relationships. However, this paper argues against the existence of such flexibility and presents instead the case that variance in the normative pain of rebelling against the collective will can be accommodated by replacing Gilbert's notion of all-or-nothing pooling of wills with an account that sees wills as becoming entangled through levels of identification with the plural subject.
I develop a variant of the constraint interpretation of the emergence of purely physical (non-biological) entities, focusing on the principle of the non-derivability of actual physical states from possible physical states (physical laws) alone. While this is a necessary condition for any account of emergence, it is not sufficient, for it becomes trivial if not extended to types of constraint that specifically constitute physical entities, namely, those that individuate and differentiate them. Because physical organizations with these features are in fact interdependent sets of such constraints, and because such constraints on physical laws cannot themselves be derived from physical laws, physical organization is emergent. These two complementary types of constraint are components of a complete non-reductive physicalism, comprising a non-reductive materialism and a non-reductive formalism.
We observe a number of connections between recent developments in the study of constraint satisfaction problems, irredundant axiomatisation and the study of topological quasivarieties. Several restricted forms of a conjecture of Clark, Davey, Jackson and Pitkethly are solved: for example we show that if, for a finite relational structure M, the class of M-colourable structures has no finite axiomatisation in first order logic, then there is no set (even infinite) of first order sentences characterising the continuously M-colourable structures amongst compact totally disconnected relational structures. We also refute a rather old conjecture of Gorbunov by presenting a finite structure with an infinite irredundant quasi-identity basis.
The so-called "adaptationism" of mainstream evolutionary biology has been criticized from a variety of sources. One, which has received relatively little philosophical attention, is developmental biology. Developmental constraints are said to be neglected by adaptationists. This paper explores the divergent methodological and explanatory interests that separate mainstream evolutionary biology from its embryological and developmental critics. It will focus on the concept of constraint itself; even this central concept is understood differently by the two sides of the dispute.
The concept of developmental constraint was at the heart of developmental approaches to evolution of the 1980s. While this idea was widely used to criticize neo-Darwinian evolutionary theory, critique does not yield an alternative framework that offers evolutionary explanations. In current Evo-devo the concept of constraint is of minor importance, whereas notions such as evolvability are at the center of attention. The latter clearly defines an explanatory agenda for evolutionary research, so that one could view the historical shift from ‘developmental constraint’ towards ‘evolvability’ as the move from a concept that is a mere tool of criticism to a concept that establishes a positive explanatory project. However, by taking a look at how the concept of constraint was employed in the 1980s, I argue that developmental constraint was not just seen as restricting possibilities (‘constraining’), but also as facilitating morphological change in several ways. Accounting for macroevolutionary transformation and the origin of novel form was an aim of these developmental approaches to evolution. Thus, the concept of developmental constraint was part of a positive explanatory agenda long before the advent of Evo-devo as a genuine scientific discipline. In the 1980s, despite the lack of a clear disciplinary identity, this concept coordinated research among paleontologists, morphologists, and developmentally inclined evolutionary biologists. I discuss the different functions that scientific concepts can have, highlighting that instead of classifying or explaining natural phenomena, concepts such as ‘developmental constraint’ and ‘evolvability’ are more important in setting explanatory agendas so as to provide intellectual coherence to scientific approaches. The essay concludes with a puzzle about how to conceptually distinguish evolvability and selection.
Although most of the contemporary debates around subjectivity are framed by a rejection of the metaphysical subject, more time needs to be spent developing the implications of abandoning the metaphysics of constraint. Doing so provides the key to approaching our pressing problem that concerns freedom, and only once invisible, ideal "constraints" have been adequately understood will all of the contemporary puzzlement that concerns intentional resistance to power be assuaged. While Sartre does not solve the problem of freedom bequeathed to us by Foucault, it is clear that he struggled with similar issues, and that his work sheds important light on the issue of ideal constraint. Once more, on Sartre's second view, power and freedom are not mutually exclusive, and in this he advances over much contemporary liberal thought. Thus, on the approach of what would be Sartre's hundredth birthday, I invite others to take this opportune moment to reevaluate the early work of this once shining philosophical star, only recently and perhaps prematurely eclipsed by anti-humanism, and recognize that now, more than ever, Sartre's thought is relevant to our very pressing concerns.
This paper is concerned with a quality space model as an account of the intelligibility of explanation. I argue that descriptions of causal or functional roles (Chalmers, Levine, 2001) are not the only basis for intelligible explanations. If we accept that phenomenal concepts refer directly, not via descriptions of causal or functional roles, then it is difficult to find role fillers for the described causal roles. This constitutes a vagueness constraint on the intelligibility of explanation. Thus, I propose to use quality space models to develop a systematic way of studying different modalities of perception and feelings, e.g., visual and auditory perception, pain, and emotion, that can reveal some structural relations among these modalities. It might turn out that topological explanation can be more intelligible than causal explanation in this case. I discuss two accounts of a quality space for color vision (Clark, 2000; Rosenthal, 2010) and propose how to construct a quality space for pain.
According to the view that Peacocke elaborates in A Study of Concepts (1992), a concept can be individuated by providing the conditions a thinker must satisfy in order to possess that concept. Hence possession conditions for concepts should be specifiable in a way that respects a non-circularity constraint. In a more recent paper, “Implicit Conceptions, Understanding and Rationality” (1998a), Peacocke argues against his former view, in the light of the phenomenon of rationally accepting principles which do not follow from what the thinker antecedently accepts. In this paper I defend the view of the book from his more recent criticisms, claiming that the non-circularity constraint should be respected, and that Peacocke's more recent insights could be accommodated in the framework of his former theory of concepts.
Robert Adams, in Finite and Infinite Goods: A Framework for Ethics, suggests a moral constraint on our obedience to God's commands: if a purportedly divine command seems abhorrently evil, then we should infer that it is not really God so commanding. I suggest that in light of his commitments to God as the standard of goodness, to the transcendence of God, and to a critical stance towards ethics, Adams should be willing to consider the possibility of a good God commanding us to do something that seems abhorrently evil to us, but really is good according to His transcendent goodness. I suggest that the ought-to-is moral constraint that Adams advocates is only appropriate when we are not certain that it is God giving the command, and that an is-to-ought constraint based on psychological certainty should be the ultimate constraint on our obedience to purportedly divine commands. This constraint advocates that if one is certain upon reflection that a command is from God, then one should obey that command, regardless of how evil it seems. After responding to several objections to this psychological constraint, I offer my own qualification, according to which it is appropriate to disobey a command that one is certain is from God if one cannot conceive that the command is good. Finally, I offer some reason to think that, contrary to Adams's assertions, the project of considering how to react to a purportedly divine command that also seems abhorrently evil is worth both philosophic and spiritual energy.
Introduction -- Instrumental rationality -- Social order -- Deontic constraint -- Intentional states -- Preference noncognitivism -- A naturalistic perspective -- Transcendental necessity -- Weakness of will -- Normative ethics.
Those who endorse the Psychological Continuity Approach (PCA) to analyzing personal identity need to impose a non-branching constraint to get the intuitively correct result that in the case of fission, one person becomes two. With the help of Brueckner's (2005) discussion, it is shown here that the sort of non-branching clause that allows proponents of PCA to provide sufficient conditions for being the same person actually runs contrary to the very spirit of their theory. The problem is first presented in connection with perdurantist versions of PCA. The difficulty is then shown to apply to endurantist versions as well.
This paper offers a new definition of "adaptationism". An evolutionary account is adaptationist, it is suggested, if it allows for multiple independent origins for the same function -- i.e., if it violates the "Unique Origin Constraint". While this account captures much of the position Gould and Lewontin intended to stigmatize, it leaves it open that adaptationist accounts may sometimes be appropriate. However, there are many important cases, including that of human rationality, in which it is not.
I argue that we should not adopt categorial restrictions on the significance of syntactically well-formed strings. Even syntactically well-formed but semantically absurd strings, such as ‘Life is but a walking shadow’ and ‘Caesar is a prime number’, can express thoughts; and competent thinkers both can and ought to be able to grasp such thoughts. A more specific way of putting this claim is that Gareth Evans’ Generality Constraint should be viewed as a fully general constraint on concept possession and propositional thought, even though Evans himself accepted only a categorially-restricted version of the Constraint. I establish this by arguing, first, that even well-formed but semantically cross-categorial strings often do possess substantive inferential roles; second, that hearers exploit these inferential roles in interpreting such strings metaphorically; and third, that there is no good reason to deny truth-conditions to strings with inferential roles.
Whether certain objects compose a whole at a given time does not seem to depend on anything other than the character of those objects and the relations between them. This observation suggests a far-reaching constraint on theories of composition. One version of the constraint has been explicitly adopted by van Inwagen and rules out his own answer to the composition question. The constraint also rules out the other well-known moderate answers that have so far been proposed.
Immoralists hold that in at least some cases, moral flaws in artworks can increase their aesthetic value. They deny what I call the valence constraint: the view that any effect that an artwork’s moral value has on its aesthetic merit must have the same valence. The immoralist offers three arguments against the valence constraint. In this paper I argue that these arguments fail, and that this failure reveals something deep and interesting about the relationship between cognitive and moral value. In the final section I offer a positive argument for the valence constraint.
This paper argues that there is a general constraint on the evolution of culture. This constraint – what I am calling the Fundamental Constraint – must be satisfied in order for a cultural system to be adaptive. The Fundamental Constraint is this: for culture to be adaptive there must be a positive correlation between the fitness of cultural variants and their fitness impact on the organisms adopting those variants. Two ways of satisfying the Fundamental Constraint are introduced, structural solutions and evaluative solutions. Because of the limitations on these solutions, this constraint helps explain why there is not more culture in nature, why the culture that does exist has the form it has, and why complex, cumulative culture is restricted to the human species.
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule that equates the expectation values of certain functions with their empirical averages. There are, however, various other ways in which one can construct constraints from empirical data, which makes the maximum entropy principle lead to very different probability assignments. This paper shows that an argument by Jaynes to justify the usual constraint rule is unsatisfactory and investigates several alternative choices. The choice of a constraint rule is also shown to be of crucial importance to the debate on the question whether there is a conflict between the methods of inference based on maximum entropy and Bayesian conditionalization.
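For concreteness, the usual constraint rule referred to above can be written out as follows. This is the standard textbook formulation; the notation (data d_1, ..., d_n, constraint functions f_k, multipliers \lambda_k) is supplied here for illustration rather than taken from the paper:

\max_{p}\; H(p) = -\sum_i p_i \log p_i
\quad \text{subject to} \quad \sum_i p_i\, f_k(x_i) = \frac{1}{n}\sum_{j=1}^{n} f_k(d_j), \qquad k = 1,\dots,m,

whose solution has the exponential-family form p_i \propto \exp\!\big(\sum_{k=1}^{m} \lambda_k f_k(x_i)\big), with the \lambda_k fixed by the constraints. The paper's point is that equating expectation values with empirical averages in this way is only one of several ways of turning the data into a constraint on p.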
Behavioral scientists studied behavior; cognitive scientists study what generates behavior. Cognitive science is hence theoretical behaviorism (or behaviorism is experimental cognitivism). Behavior is data for a cognitive theorist. What counts as a theory of behavior? In this paper, a methodological constraint on theory construction -- "neoconstructivism" -- will be proposed (by analogy with constructivism in mathematics): Cognitive theory must be computable; given an encoding of the input to a behaving system, a theory must be able to compute (an encoding of) its outputs. It is a mistake to conclude, however, that this constraint requires cognitive theory to be computational, or that it follows from this that cognition is computation.
The FASB in its Conceptual Framework has set high principles in the ethics of standard-setting in accounting. This paper concentrates on what the FASB calls the cost/benefit constraint, i.e., the commitment to setting an accounting standard only when the benefits of the standard exceed the costs of that standard to all stakeholders. This constraint is supposed to take precedence over other concerns, such as neutrality (freedom from bias) of accounting information. The major conclusion of this paper is that a conflict exists between the FASB's commitment and its practice. There is no evidence that the FASB has always made a costs and benefits judgement with respect to proposed standards. In the cases when such a judgement is made, the FASB discounts social costs; therefore, it is not considering costs to all stakeholders. At the same time the FASB discounts social costs, it seems to have an undue concern for standards that do not increase the volatility of net income. The Conceptual Framework explicitly defines costs as the costs to society as a whole.
The everyday virtue of civility functions as a constraint upon informal social pressures. Can civility also be understood, as John Rawls has proposed, as a distinctively political constraint? I contrast Rawls's project of constraining the political with Mill's of constraining both the social and the political, and explore Rawls's account of the relation between the two. I argue that Rawls's political duty of civility rests on the assumption that the political is peculiarly coercive; ignores the social enforcement of morality; and implausibly has civility apply to motives in acting, rather than to actions.
Judith Jarvis Thomson concludes “A Defense of Abortion” with a discussion of samaritanism. Whereas her rights-based arguments demonstrate the moral permissibility of virtually all abortions, this new consideration of samaritanism provides grounds for morally objecting to certain abortions that are otherwise morally permissible given strictly rights-based considerations. I argue, first, that this samaritanism constraint on the moral permissibility of abortion involves an appeal to virtue-theoretical considerations. I then show why this hybridization of rights-based considerations and virtue-theoretical considerations has advantages over responses to the moral status of abortion that are either exclusively rights-based, or else exclusively virtue-theoretical. I conclude by offering some thoughts on how to utilize this hybrid strategy outside of Thomson’s particular context, as well as why we might generally favor such a strategy in our moral reasoning.
Recent work has shown that preschool-aged children and adults understand freedom of choice regardless of culture, but that adults across cultures differ in perceiving social obligations as constraints on action. To investigate the development of these cultural differences and universalities, we interviewed school-aged children (4–11) in Nepal and the United States regarding beliefs about people's freedom of choice and constraint to follow preferences, perform impossible acts, and break social obligations. Children across cultures and ages universally endorsed the choice to follow preferences but not to perform impossible acts. Age and culture effects also emerged: Young children in both cultures viewed social obligations as constraints on action, but American children did so less as they aged. These findings suggest that while basic notions of free choice are universal, recognitions of social obligations as constraints on action may be culturally learned.
This paper presents the Constraint Language for Lambda Structures (CLLS), a first-order language for semantic underspecification that conservatively extends dominance constraints. It is interpreted over lambda structures, tree-like structures that encode λ-terms. Based on CLLS, we present an underspecified, uniform analysis of scope, ellipsis, anaphora, and their interactions. CLLS solves a variable capturing problem that is omnipresent in scope underspecification and can be processed efficiently.
Palmer's “isomorphism constraint” presupposes the logical possibility of two qualitatively disparate sets of sensory experiences exhibiting the same relationships. Two arguments are presented to demonstrate that, because such a state of affairs cannot be coherently specified, its occurrence is not logically possible. The prospects for behavioral and biological science are better than Palmer suggests; those for functionalism are worse.
Duncan MacIntosh has argued that David Gauthier's notion of a constrained maximization disposition faces a dilemma. For if such a disposition is revocable, it is no longer rational come the time to act on it, and so acting on it is not (as Gauthier argues) rational; but if it is not revocable, acting on it is not voluntary. This paper is a response to MacIntosh's dilemma. I introduce an account of rational intention of a type which has become increasingly and independently prominent in the literature, and argue that, on this account, rational and voluntary constraint is possible.
We present a rendering of some common grammatical formalisms in terms of evolving algebras. Though our main concern in this paper is with constraint-based formalisms, we also discuss the more basic case of context-free grammars. Our aim throughout is to highlight the use of evolving algebras as a specification tool to obtain grammar formalisms.
I distinguish between being cognisant and being able to perform intelligent operations. The former, but not the latter, minimally involves the capacity to make adequate judgements about one's relation to objects in the environment. The referential nature of cognisance entails that the mental states of cognisant systems must be inter-related holistically, such that an individual thought becomes possible because of its relation to a system of potential thoughts. I use Gareth Evans' 'Generality Constraint' as a means of describing how the reference and holism of mental states in cognisant systems are mutually dependent. Next, I describe attempts to deny the relevance of holism and reference by positing a mentalese. These attempts fail because the meanings of symbols are underdetermined, with there being no principled means of distinguishing between the mental tokening of a symbol and its disambiguation. I argue that the connectionist meta-theory does not encounter this problem because it is able to encompass the holism of the mental. Recent attempts to show that symbol processing theories of thought must be preferred to connectionist theories do not work. Despite appearances to the contrary, the Generality Constraint favours connectionist not symbol-processing theories.
The Newell Test is an important step in advancing our understanding of cognition. One critical constraint is missing from this test: A cognitive architecture must be self-contained. ACT-R and connectionism fail on this account. I present an alternative proposal, called Distributed Adaptive Control (DAC), and expose it to the Newell Test with the goal of achieving a clearer specification of the different constraints and their relationships, as proposed by Anderson & Lebiere (A&L).
The orderly output constraint (OOC) is extraneous. Talkers “speak in lines” in its absence. Further, there is no perceptual motivation for an OOC; perceivers ignore the linearity between F2 at consonant-vowel onset and F2 in the vowel. In any case, the analogy with bat and barn owl localization systems underlying the theory is extreme, Sussman et al.'s comments to the contrary notwithstanding.
There is a widespread belief that, conceptually, justice cannot require what we cannot achieve. This belief is sometimes used by defenders of so-called ‘non-ideal theories of justice’ to criticise so-called ‘ideal theories of justice’. I refer to this claim as ‘the feasibility constraint on the concept of justice’ and argue against it. I point to its various implausible implications and contend that a willingness to apply the label ‘unjust’ to some regrettable situations that we cannot fix is going to enhance the action-guiding potential of a conception of justice, by providing an aspirational ideal. This is possible on the condition that, at all times, we cannot specify with certainty the limits of what is feasible for us collectively. The rejection of the feasibility constraint entails that there can be injustice without perpetrators; this is a theoretical price worth paying.
It is widely mooted that a plausible computational cognitive model should involve both symbolic and connectionist components. However, sound principles for combining these components within a hybrid system are currently lacking; the design of such systems is often ad hoc. In an attempt to ameliorate this we provide a framework of types of hybrid systems and constraints therein, within which to explore the issues. In particular, we suggest the use of system independent constraints, whose source lies in general considerations about cognitive systems, rather than in particular technological or task-based considerations. We illustrate this through a detailed examination of an interruptibility constraint: handling interruptions is a fundamental facet of cognition in a dynamic world. Aspects of interruptions are delineated, as are their precise expression in symbolic and connectionist systems. We illustrate the interaction of the various constraints from interruptibility in the different types of hybrid systems. The picture that emerges of the relationship between the connectionist and the symbolic within a hybrid system provides for sufficient flexibility and complexity to suggest interesting general implications for cognition, thus vindicating the utility of the framework.
The famous Allen's interval relations constraint propagation algorithm was intended for linear time. Its 13 primitive relations define all the possible mutual locations of two intervals on the time-axis. In this paper an application of the algorithm for non-linear time is suggested. First, a new primitive relation is added. It is called excludes since an occurrence of one event in a certain course of events excludes an occurrence of the other event in this course. Next, new composition rules for relations between intervals are presented: some of the old rules are extended by the relation excludes, and entirely new ones are formulated for composing the relation excludes with the other relations. Four different composition tables are considered. The choice of a composition table depends on whether time is branching or not, and whether intervals can contain non-collinear subintervals or not.
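The propagation idea behind such an algorithm can be sketched in a few lines of code. The following toy is illustrative only and is not the paper's algorithm: it uses a small fragment of Allen's thirteen relations, omits converse relations for brevity, and the entry for the added excludes relation is a hypothetical placeholder rather than a rule from any of the paper's four composition tables.

from itertools import product

# Toy fragment of Allen's interval algebra plus a hypothetical 'excludes' relation.
ALL = frozenset({'before', 'after', 'equal', 'excludes'})

# COMP[(r, s)] = possible relations between A and C, given r(A, B) and s(B, C).
COMP = {
    ('before', 'before'): {'before'},
    ('before', 'equal'): {'before'},
    ('equal', 'before'): {'before'},
    ('after', 'after'): {'after'},
    ('after', 'equal'): {'after'},
    ('equal', 'after'): {'after'},
    ('equal', 'equal'): {'equal'},
    # Hypothetical placeholder: composing through an excluded event is taken
    # to give no information; the paper's tables would refine this.
    ('excludes', 'excludes'): set(ALL),
}

def compose(rs, ss):
    # Compose two sets of basic relations; pairs missing from COMP give no constraint.
    out = set()
    for r, s in product(rs, ss):
        out |= COMP.get((r, s), ALL)
    return frozenset(out)

def propagate(n, given):
    # Path-consistency style propagation: R_ij <- R_ij intersect (R_ik compose R_kj).
    net = {(i, j): frozenset(given.get((i, j), ALL))
           for i in range(n) for j in range(n) if i != j}
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, k, j}) < 3:
                continue
            refined = net[(i, j)] & compose(net[(i, k)], net[(k, j)])
            if refined != net[(i, j)]:
                net[(i, j)], changed = refined, True
    return net

# A before B and B before C entail A before C.
print(propagate(3, {(0, 1): {'before'}, (1, 2): {'before'}})[(0, 2)])

Swapping in one of the paper's four composition tables would amount to replacing COMP; the propagation loop itself is unchanged whether time is linear or branching.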
To explain phenomenon R by showing how mechanism M yields output R each time it is triggered by circumstances C is to give a causal explanation of R. This paper analyses what mechanistic analysis can contribute to our understanding of causation in general and of downward causation in particular. It is first shown, against Glennan, that the concept of causation cannot be reduced to that of mechanism. Second it is shown, against Craver and Bechtel, that mechanistic explanation allows us to make sense of causal processes that cut across levels, either in bottom-up direction where a change in a part of a system causes a change in the whole, or in downward direction where a change at the level of the system causes a change at the level of its parts. I suggest construing a decision's influence on molecules in muscle cells as a global constraint. Microscopic laws determine the detailed evolution of muscle cells and glucose molecules, but this evolution is constrained by the fact that it must be compatible with the action caused by the decision.
This paper is concerned with reasonings that purport to explain why certain organisms have certain traits by showing that their actual design is better than contrasting designs. Biologists call such reasonings ‘functional explanations’. To avoid confusion with other uses of that phrase, I call them ‘design explanations’. This paper discusses the structure of design explanations and how they contribute to scientific understanding. Design explanations are contrastive and often compare real organisms to hypothetical organisms that cannot possibly exist. They are not causal but appeal to functional dependencies between an organism’s different traits. These explanations point out that because an organism has certain traits (e.g., it lives on land), it cannot be alive if the trait to be explained (e.g., having lungs) were replaced by a specified alternative (e.g., having gills). They can be understood from a mechanistic point of view as revealing the constraints on what mechanisms can be alive.
We can think of ordinary truth-conditional semantics as giving us constraints on cognitive states. But constraints on cognitive states can be more complicated than simply believing a proposition. And we communicate more complicated constraints on cognitive states. We also communicate constraints that seem to bear on affective and conative states.
Ian Carter argues against what he calls the ‘specific freedom thesis’, which claims that in asking whether our society or any individual is free, all we need or can intelligibly concern ourselves with is their freedom to do this or that specific thing. Carter claims that issues of overall freedom are politically and morally important and that, in valuing freedom as such, liberals should be committed to a measure of freedom overall. This paper argues against Carter's further claim that rejection of the specific freedom thesis requires rejection of morally based determinations of degrees of overall freedom. Using a concept of freedom as a capacity to pursue one's interests, it is argued that the value of freedom overall is not reducible to the value of specific freedoms, and that conditions of action can be determined as constraints only within the context of their impact on freedom overall. Taking the case of coercive proposals, it is argued that we must evaluate the morality of the circumstances in which conditional proposals are made if we are to weigh the opportunities and constraints contained in the proposal to determine whether its recipient suffers a loss of overall freedom. We must therefore appeal to values other than that of liberty itself to determine degrees of liberty overall, which we require in turn to determine whether threats or offers are coercive.
The statistical community has brought logical rigor and mathematical precision to the problem of using data to make inferences about a model’s parameter values. The TETRAD project, and related work in computer science and statistics, aims to apply those standards to the problem of using data and background knowledge to make inferences about a model’s specification. We begin by drawing the analogy between parameter estimation and model specification search. We then describe how the specification of a structural equation model entails familiar constraints on the covariance matrix for all admissible values of its parameters; we survey results on the equivalence of structural equation models, and we discuss search strategies for model specification. We end by presenting several algorithms that are implemented in the TETRAD II program.
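To illustrate the kind of covariance constraint at issue, here is the standard single-factor example (a textbook case chosen for illustration, not an example drawn from the paper): four observed indicators of one latent variable satisfy vanishing-tetrad equalities for every admissible parameter value.

X_i = \lambda_i L + \varepsilon_i \quad (i = 1,\dots,4), \qquad \mathrm{Var}(L) = \phi, \qquad \text{with the } \varepsilon_i \text{ uncorrelated with } L \text{ and with each other,}

so that \sigma_{ij} = \mathrm{Cov}(X_i, X_j) = \lambda_i \lambda_j \phi for i \neq j, and hence

\sigma_{12}\,\sigma_{34} \;=\; \sigma_{13}\,\sigma_{24} \;=\; \sigma_{14}\,\sigma_{23} \;=\; \lambda_1 \lambda_2 \lambda_3 \lambda_4\, \phi^{2}.

A specification search of the kind described tests whether such implied constraints hold in the sample covariance matrix, whatever the particular parameter values.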
In three experiments we studied lay observers’ attributions of responsibility for an antisocial act (homicide). We systematically varied both the degree to which the action was coerced by external circumstances and the degree to which the actor endorsed and accepted ownership of the act, a psychological state that philosophers have termed ‘identification’. Our findings with respect to identification were highly consistent. The more an actor was identified with an action, the more likely observers were to assign responsibility to the actor, even when the action was performed under constraints so powerful that no other behavioral option was available. Our findings indicate that social cognition involving assignment of responsibility for an action is a more complex process than previous research has indicated. It would appear that laypersons’ judgments of moral responsibility may, in some circumstances, accord with philosophical views in which freedom and determinism are regarded to be compatible.
Community standards, ethical norms, and perceptions of fairness often serve as constraints on pure profit maximizing behavior. Consider the following examples: Most hardware stores refrain from raising prices on snow shovels after a major snow storm, even where short term profits might be increased. Most employers do not lower wages for existing employees, even as unemployment in the area increases. Automobile dealerships rarely raise sticker prices to cope with the long waiting periods for a popular model. Each of these anomalies is consistent with the proposition that firms increase profits subject to fairness constraints. This paper examines perceptions of fairness in the residential real estate industry and explores how community standards affect economic decision-making. The residential real estate industry is unique. One party to the transaction (the landlord) frames decisions as pure business decisions. The other party to the transaction (the tenant) frames decisions more broadly. While a tenant's choice of apartments is in part viewed as a business decision, tenants consider a broad spectrum of non-business issues, as well.
Some philosophers have developed comprehensive interactive models that purport to exhibit the various normative constraints that agents need to adopt in order to achieve what otherwise would be an unattainable and unsustainable social order. Robert Brandom’s semantic inferentialism purports to show how a rational construction of social coordination is enacted and maintained through specific mappings that agents make of each other’s commitments (beliefs) and entitlements (justified beliefs). Strongly influenced by Brandom’s account, Joseph Heath reconstructs a number of historically emergent deontic constraints that solve what are otherwise unsolvable game-theoretic problems in the maintenance of the social order. But both accounts omit a sufficient analysis of the way in which individual agents, who comprise the normative order, are effectively addressed by norms when they act. How does an agent, who is facing a unique interactive situation with more than one normative path to choose, make a decision? One solution, attractive to some continental thinkers, is to appeal to an innate irrational component of decision-making that lies outside of rational bounds (e.g., Nietzsche’s will to power or Adorno’s das Hinzutretende). The model I will defend lies in an existential account of agency that occupies a middle ground between a pure naturalism (where instinct dominates) and a pure regularism, or “normativism” (where reason dominates). The existential model asserts that the given normative field within which an agent operates conditions the formation of the agent’s intention to act but does not determine the effecting of an action as such — whether individual or collective. On this model, the specification of the acting or not acting on the normative intention is determined only retrospectively on the basis of what the agent actually did in a way that is in principle public and observable. Thus the content of the agency can be reconstructed only historically. The embodied character of the agent is what makes the action relatable to the sum of conditions that were co-determinative of the action at the time it occurred. The advantage of this view is that it does not overreach the highly limited access that we have to the inner workings of intentions to act while at the same time providing an account of agency independent of simply the agent’s relation to norms.
It is often thought that epistemic relations between experience and belief make it possible for our beliefs to be about or "directed towards" the empirical world. I focus on an influential attempt by John McDowell to defend a view along these lines. According to McDowell, unless experiences are the sorts of things that can be our reasons for holding beliefs, our beliefs would not be "answerable" to the facts they purportedly represent, and so would lack all empirical content. I argue that there is no intelligible conception of what it is for beliefs to be answerable to the facts that supports McDowell's claim that our empirical beliefs must be justified by experience.
2. The Contingency and A posteriority Constraint: A formulation of the thesis must make physicalism come out contingent and a posteriori. First, physicalism is a contingent truth, if it is a truth. This means that physicalism could have been false, i.e. there are counterfactual worlds in which physicalism is false, for example, counterfactual worlds in which there are miracle-performing angels. Moreover, if physicalism is true, our knowledge of its truth is a posteriori. This is to say that there are ways the world could turn out to be such that physicalism is false. For example, if there are miracle-performing angels, then physicalism is false. So there are worlds considered as actual in which physicalism is false. For short, call this ‘the a posteriority constraint’.
How should physical entities be characterized? Physicalists, who have most to do with the notion, usually characterize the physical by reference to two components: (1) physical entities are the entities treated by fundamental physics, with the proviso that (2) physical entities are not fundamentally mental (that is, they do not individually possess or bestow mentality). Here I explore the extent to which the appeals to fundamental physics and to the NFM (“no fundamental mentality”) constraint are appropriate for characterizing the physical, especially for purposes of formulating physicalism. Ultimately, I motivate and defend a version of an account incorporating both components: The physics-based NFM account: An entity existing at a world w is physical iff (i) it is treated, approximately accurately, by current or future (in the limit of inquiry, ideal) versions of fundamental physics at w, and (ii) it is not fundamentally mental (that is, does not individually either possess or bestow mentality).
The relations among consciousness, brain, behavior, and scientific explanation are explored in the domain of color perception. Current scientific knowledge about color similarity, color composition, dimensional structure, unique colors, and color categories is used to assess Locke.
§1. Metaethics and Explanation. Given some perplexing subject-matter or mode of thought, philosophers typically ask metaphysical and epistemological questions. They ask about the nature (if any) of the phenomenon, and they ask about our knowledge (if any) of it. When it comes to morality, many moral philosophers ask metaphysical questions like the following. Are there moral facts or states of affairs or property instantiations about which we are thinking when we make moral judgements, and which (when we get it right) are the truth-makers of those moral judgements? Or are there no such moral facts (or states of affairs or property instantiations)? Furthermore, if there are such moral facts (or states of affairs or property instantiations), what are they like? Are they in some sense ‘mind-dependent’ or ‘mind-independent’? And how do moral facts (or states of affairs or property instantiations) relate to non-moral or ‘natural’ facts (or states of affairs or property instantiations)? Those are the usual metaphysical questions. The epistemological questions tend to be of the following sort. Assuming there are moral facts (or states of affairs or property instantiations), how (if at all) do we know about them? And what (if anything) would make our beliefs about them justified? These two epistemological questions make certain assumptions. One assumption is that our moral beliefs succeed in possessing the positive epistemic properties of being knowledge or being justified. But perhaps our moral beliefs fail to have these positive epistemic characteristics. Another assumption is that moral judgements are beliefs. But perhaps moral judgements are not beliefs at all, but are emotions or desires. Or, to make the mental categories broader, maybe moral judgements are ‘non-cognitive’ rather than ‘cognitive’ states. We need to ask: what kind of mental state is forming or holding a moral judgement? This is not really an epistemological question since epistemology is about a value that beliefs can have, not about whether the judgements in question are beliefs rather than some other kind of mental state.
Positive arguments on behalf of passion are scarce in liberal political theory. Rather, liberal theorists tend to push passion to the margins of their theories of politics, either by ignoring it or by explicitly arguing that passion poses a danger to politics and is best kept out of the public realm. The purpose of this essay is to criticize these marginalizations and to illustrate their roots in impoverished conceptions of passion. Using a richer conception of passion as the desire for an envisioned good, I argue that it is neither possible nor desirable to eliminate passion from politics. Passion should therefore be established as a central category of analysis in political theory alongside other key concerns. Key Words: passion, reason, politics, liberalism, eros.
David Albert (2000) and Barry Loewer (2007) have argued that the temporal asymmetry of our concept of causal influence or control is grounded in the statistical mechanical assumption of a low-entropy past. In this paper I critically examine Albert's and Loewer's accounts.
Recent research in the cognitive science of religion suggests that humans intuitively believe that others survive death. In response to this finding, three cognitive theories have been offered to explain this: the simulation constraint theory (Bering, 2002); the imaginative obstacle theory (Nichols, 2007); and terror management theory (Pyszczynski, Rothschild, & Abdollahi, 2008). First, I provide a critical analysis of each of these theories. Second, I argue that these theories, while perhaps explaining why one would believe in his own personal immortality, leave an explanatory gap in that they do not explain why one would intuitively attribute survival of death to others. To fill in the gap, I offer a cognitive theory based on offline social reasoning and social embodiment which provides for the belief in an eternal social realm in which the deceased survive—the afterlife.
One of the defining characteristics of Kant’s “critical philosophy” is what has been called the “critique of immediacy” or the rejection of the “myth of the given.” According to the Kantian position, no object can count as an object for a human knower apart from the knower’s own activity or spontaneity. That is, no object can count as an object for a human knower on the basis of the object’s givenness alone. But this gives rise to a problem: how is it possible to accept the Kantian critique of immediacy while also giving an epistemologically adequate account of the constrained or finite character of human knowing (i.e., an account that does not rely on some appeal to what is simply “given”)? This paper examines how this crucial question is addressed (with more or less success) in the “critical philosophies” of Kant, Lonergan, and Fichte.
A reading of Kant’s viewpoint on objectivity is suggested that finds inspiration in the second part of the third Critique, on living systems. It develops the idea that the need to articulate the distinction between objectivity and subjectivity only emerges to the extent that something resists the anticipative procedures of a living, actively engaged being. The possibility of objective knowledge, so it is argued, rests on the possibility of developing an adequate orientation in a phenomenal world, i.e., the possibility of actively distinguishing an “outside” from an “inside”—this not on the basis of an a priori principle, but by taking into account the punctual resistances and disappointments that appear within contingent encounters leading to pleasure and displeasure. We consider negation as a constitutive factor in the emergence of this very basic distinction, as well as in more elaborate and complex differentiations between objectivity and subjectivity.
A type quantifier F is symmetric iff F(X, X)(Y) = F(Y, Y)(X). It is shown that quantifiers denoted by irreducible binary determiners in natural languages are both conservative and symmetric and not only conservative.
With his most famous question, the Being-question, the Seinsfrage (a question essentially and not incidentally obliterated by the tradition of philosophic questioning), Heidegger proposes a phenomenology of questioning. This is not counter to the project of philosophy but it calls us to our own experience as questioners, even as those who ask, who can ask 'Why the why.' For Heidegger, 'only because man is in this way, can he and must he, in each case, say, not only yes or no, but essentially yes and no.'
A total of 152 students were asked to respond to a series of causal conditional (“If P then Q”) inferences with major premises for which there was variable access to information contradicting the premises. Half the students were given 12.5 s for each inference, the other half were given 8.5 s. The percentage of accepted inferences was significantly lower when the time was shorter for the MP and MT inferences, but no effect was observed for the AC and DA inferences. Results are interpreted as supporting the idea that inhibition of retrieved information contradicting the premise is necessary to explain reasoning with the MP and MT inferences under logical instructions (Markovits & Barrouillet, 2002).
This paper introduces a conjecture that laws of nature may be of different kinds, in particular that there may, in addition to laws which constrain outcomes (C-laws), be laws which empower systems to direct or select outcomes (E-laws) and laws which guide systems in such selections (G-laws). The paper defends this conjecture by suggesting that it is not excluded by anything we know, is plausible, and is potentially of great explanatory power.
For the sentences of languages that contain operators that express the concepts of definiteness and indefiniteness, there is an unavoidable tension between a truth-theoretic semantics that delivers truth conditions for those sentences that capture their propositional contents and any model-theoretic semantics that has a story to tell about how indefiniteness in a constituent affects the semantic value of sentences which embed it. But semantic theories of both kinds play essential roles, so the tension needs to be resolved. I argue that it is the truth theory which correctly characterises the notion of truth, per se. When we take into account the considerations required to bring model theory into harmony with truth theory, those considerations undermine the arguments standardly used to motivate supervaluational model theories designed to validate classical logic. But those considerations also show that celebration would be premature for advocates of the most frequently encountered rival approach – many-valued model theory.
Andrews et al. present a form of instrumental adaptationism that is designed to test the hypothesis that a given trait is an adaptation. This epistemological commitment aims to make clear statements about behavioural natural kinds. The instrumental logic is sound, but it is the limits of our empirical imagination that can cause problems for theory construction.
Model RB is a model of random constraint satisfaction problems, which exhibits an exact satisfiability phase transition and many hard instances, both experimentally and theoretically. Benchmarks based on Model RB have been successfully used by various international algorithm competitions and many research papers. In a previous work, Xu and Li defined two notions, called i-constraint assignment tuple and flawed i-constraint assignment tuple, to show an exponential resolution complexity for Model RB. These two notions are similar to some kind of consistency in constraint satisfaction problems, but seem different from all kinds of consistency so far known in the literature. In this paper, we explicitly define this kind of consistency, called variable-centered consistency, and show an upper bound on a parameter in Model RB such that, up to this bound, the typical instances of Model RB are variable-centered consistent.