The systems of patent rights in force in Europe today, both at the national and the regional level, contain general clauses prohibiting the patenting of inventions whose publication and exploitation would be contrary to “ordre public” or morality. Recent years have brought frequent discussion about limiting the possibility of patent protection for biotechnological inventions for ethical reasons. This is undoubtedly a result of the dynamic development of this field over the last several years. Human genome sequencing, the first successful cloning of mammals, and the progress in human stem cell research present humanity with many new questions of an ethical nature. Directive 98/44 of the European Parliament and of the Council of July 6, 1998, on the Legal Protection of Biotechnological Inventions created a new basis for patent protection in this field of technology. Based on the European experience to date, however, it must be said that patent law is not the right place to legislate the consequences of the morality of an invention.
The paper aims to develop an interactional account of illocutionary practice, which results from integrating elements of Millikan's biological model of language within the framework of Austin's theory of speech acts. The proposed account rests on the assumption that the force of an act depends on what counts as its interactional effect or, in other words, on the response that it conventionally invites or attempts to elicit. The discussion is divided into two parts. The first one reconsiders Austin's and Millikan's contributions to the study of linguistic practice. The second part presents the main tenets of the interactional account. In particular, it draws a distinction between primary and secondary conventional patterns of interaction and argues that they make up coherent systems representing different language games or activity types; it is also argued that the proposed account is not subject to the massive ambiguity problem.
The aim of the paper is to explore the interrelation between persuasion tactics and properties of speech acts. We investigate two types of arguments ad: ad hominem and ad baculum. We show that with both of these tactics, the structures that play a key role are not inferential, but rather ethotic, i.e., related to the speaker’s character and trust. We use the concepts of illocutionary force and constitutive conditions related to the character or status of the speaker in order to explain the dynamics of these two techniques. In keeping with the research focus of the Polish School of Argumentation, we examine how the pragmatic and rhetorical aspects of the force of ad hominem and ad baculum arguments exploit trust in the speaker’s status to influence the audience’s cognition.
The aim of this paper is twofold. First, the author examines Mitchell Green’s account of the expressive power and score-changing function of speech acts; second, he develops an alternative, though also evolutionist, approach to explaining these two hallmarks of verbal interaction. After discussing the central tenets of Green’s model, the author draws two distinctions – between externalist and internalist aspects of veracity, and between perlocutionary and illocutionary credibility – and argues that they constitute a natural refinement of Green’s original conceptual framework. Finally, the author uses the refined framework to develop an alternative account of expressing thoughts with words. In particular, he argues that in theorising about expressing thoughts with words – as well as about using language to change context – we should adopt a Millikanian view on what can be called, following Green, acts of communication and an Austinian approach to speech or illocutionary acts.
In this paper, I develop a speech-act based account of presumptions. Using a score-keeping model of illocutionary games, I argue that presumptions construed as speech acts can be grouped into three illocutionary act types defined by reference to how they affect the state of a conversation. The paper is organized into two parts. In the first one, I present the score-keeping model of speech act dynamics; in particular, I distinguish between two types of mechanisms—the direct mechanism of illocution and the indirect one of accommodation—that underlie the functioning of illocutionary acts. In the second part, I use the presented model to distinguish between three acts: the unilateral act of individual presumption, the point of which is to shift the burden of proof by making the hearer committed to justifying his refusal to endorse the proposition communicated by the speaker, whenever he refuses to endorse it; the bilateral act of joint presumption (‘bilateral’ in that it is performed jointly by at least two conversing agents), the function of which is to confer on the proposition endorsed by the speaker the normative status of jointly recognized though tentative acceptability; and the indirect or back-door act of collective presumption, the purpose of which is to sustain rules and practices to which the conversing agents defer the felicity of their conversational moves.
The paper develops a score-keeping model of illocutionary games and uses it to account for mechanisms responsible for creating institutional facts construed as rights and commitments of participants in a dialogue. After introducing the idea of Austinian games—understood as abstract entities representing different levels of the functioning of discourse—the paper defines the main categories of the proposed model: interactional negotiation, illocutionary score, appropriateness rules and kinematics rules. Finally, it discusses the phenomenon of accommodation as it occurs in illocutionary games and argues that the proposed model presupposes an externalist account of illocutionary practice.
The paper develops a non-Gricean account of accommodation: a context-adjusting process guided by the assumption that the speaker’s utterance constitutes an appropriate conversational move. The paper is organized into three parts. The first one reconstructs the basic tenets of Lepore and Stone's non-Gricean model of meaning-making, which results from integrating direct intentionalism and extended semantics. The second part discusses the phenomenon of accommodation as it occurs in conversational practice. The third part uses the tenets of the non-Gricean model of meaning-making to account for the discursive mechanisms underlying accommodation; the proposed account relies on a distinction between the rules of appropriateness, which form part of extended grammar, and the Maxim of Appropriateness, which functions as a discursive norm guiding our conversational practice.
The aim of this paper is to reformulate the Linguistic Underdeterminacy Thesis by making use of Austin’s theory of speech acts. Viewed from the post-Gricean perspective, linguistic underdeterminacy consists in there being a gap between the encoded meaning of a sentence uttered by a speaker and the proposition that she communicates. According to the Austinian model offered in this paper, linguistic underdeterminacy should be analysed in terms of semantic and force potentials conventionally associated with the lexical and syntactic properties of the pheme uttered by the speaker; in short, it is claimed that the conventionally specified phatic meaning of an utterance underdetermines its content and force. This Austinian version of the Linguistic Underdeterminacy Thesis plays a central role in a contextualist model of verbal communication. The model is eliminativist with respect to rhetic content and illocutionary force: it takes contents and forces to be one-off constructions whose function is to classify individual utterances in terms of their representational and institutional effects, respectively.
The purpose of this paper is twofold. First, it aims at providing an account of an indirect mechanism responsible for establishing one's power to issue binding directive acts; second, it is intended as a case for an externalist account of illocutionary interaction. The mechanism in question is akin to what David Lewis calls presupposition accommodation: a rule-governed process whereby the context of an utterance is adjusted to make the utterance acceptable; the main idea behind the proposed account is that the indirect power-establishing mechanism involves the use of imperative sentences that function as presupposition triggers and as such can trigger the accommodating change of the context of their utterance. According to the externalist account of illocutionary interaction, in turn, at least in some cases the illocutionary force of an act is determined by the audience's uptake rather than by what the speaker intends or believes; in particular, at least in some cases it is the speaker, not her audience, who is invited to accommodate the presupposition of her act. The paper has three parts. The first one defines a few terms — i.e., an “illocution”, a “binding act”, the “audience's uptake” and an “Austinian presupposition” — thereby setting the stage for the subsequent discussion. The second part formulates and discusses the main problem of the present paper: what is the source of the agent's power to perform binding directive acts? The third part offers an account of the indirect power-establishing mechanism and discusses its externalist implications.
The paper reconstructs and discusses three different approaches to the study of speech acts: (i) the intentionalist approach, according to which most illocutionary acts are to be analysed as utterances made with Gricean communicative intentions, (ii) the institutionalist approach, which is based on the idea of illocutions as institutional acts constituted by systems of collectively accepted rules, and (iii) the interactionalist approach, the main tenet of which is that illocutionary acts are performed by making conventional moves in accordance with patterns of social interaction. It is claimed that, first, each of the discussed approaches presupposes a different account of the nature and structure of illocutionary acts, and, second, all those approaches result from one-sided interpretations of Austin’s conception of verbal action. The first part of the paper reconstructs Austin's views on the functions and effects of felicitous illocutionary acts. The second part reconstructs and considers three different research developments in the post-Austinian speech act theory—the intentionalist approach, the institutionalist approach, and the interactionalist approach.
The aim of this paper is to resist four arguments, originally developed by Mark Siebel, that seem to support scepticism about reflexive communicative intentions. I argue, first, that despite their complexity reflexive intentions are thinkable mental representations. To justify this claim, I offer an account of the cognitive mechanism that is capable of producing an intention whose content refers to the intention itself. Second, I claim that reflexive intentions can be individuated in terms of their contents. Third, I argue that the explanatory power of the theory of illocutionary reflexive intentions is not as limited as it would initially seem. Finally, I reject the suggestion that the conception of reflexive communicative intentions ascribes to a language user more cognitive abilities than he or she really has.
This article is devoted to the problem of the ontological foundations of three-dimensional Euclidean geometry. Starting from Bertrand Russell’s intuitions concerning the sensual world, we try to show that it is possible to build a foundation for pure geometry by means of so-called regions of space. It is not our intention to present a mathematically developed theory, but rather to demonstrate the basic assumptions, tools and techniques that are used in the construction of systems of point-free geometry and topology by means of mereology and Whitehead-like connection structures. We list and briefly analyze axioms for mereological structures, as well as those for connection structures. We argue that mereology is a good tool to model so-called spatial relations. We also try to justify our choice of axioms for the connection relation. Finally, we briefly discuss two theories: Grzegorczyk’s point-free topology and Tarski’s geometry of solids.
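For orientation, here is a minimal sketch of the kind of axioms at stake; the exact axiom sets analyzed in the article may differ. Classical mereology treats parthood \sqsubseteq as a partial order, and Whitehead-like connection structures add a reflexive, symmetric relation C that is monotone with respect to parthood:
\forall x\,(x \sqsubseteq x); \quad \forall x \forall y\,(x \sqsubseteq y \wedge y \sqsubseteq x \rightarrow x = y); \quad \forall x \forall y \forall z\,(x \sqsubseteq y \wedge y \sqsubseteq z \rightarrow x \sqsubseteq z)
\forall x\, C(x,x); \quad \forall x \forall y\,(C(x,y) \rightarrow C(y,x)); \quad \forall x \forall y\,(x \sqsubseteq y \rightarrow \forall z\,(C(z,x) \rightarrow C(z,y)))
Points are then not taken as primitives but reconstructed from regions by means of these relations.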
Legal probabilism is the view that juridical fact-finding should be modeled using Bayesian methods. One of the alternatives to it is the narration view, according to which we should instead conceptualize the process in terms of competing narrations of what happened. The goal of this paper is to develop a reconciliatory account, on which the narration view is construed from the Bayesian perspective within the framework of formal Bayesian epistemology.
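Purely for illustration of the Bayesian side of this picture (and not of the paper's own formal proposal), competing narrations can be treated as rival hypotheses whose credences are updated on items of evidence; all names and numbers in the following sketch are hypothetical.

# Illustrative only: Bayesian updating over two competing narrations of a case.
# All priors and likelihoods are made-up numbers, not data or results from the paper.
priors = {"prosecution_story": 0.5, "defence_story": 0.5}

likelihoods = {  # P(evidence | narration), assumed values
    "fingerprint_match": {"prosecution_story": 0.8, "defence_story": 0.2},
    "corroborated_alibi": {"prosecution_story": 0.3, "defence_story": 0.7},
}

def update(credences, evidence):
    """One application of Bayes' theorem: P(H | E) is proportional to P(E | H) * P(H)."""
    unnormalized = {h: likelihoods[evidence][h] * p for h, p in credences.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

credences = priors
for piece in ["fingerprint_match", "corroborated_alibi"]:
    credences = update(credences, piece)

print(credences)  # posterior credences in the two narrations after both pieces of evidence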
This is the first of two papers devoted to Andrzej Grzegorczyk’s point-free system of topology from Grzegorczyk (1960: 228–235, https://doi.org/10.1007/BF00485101). His system was one of the very first fully fledged axiomatizations of topology based on the notions of region, parthood and separation. Its peculiar and interesting feature is the definition of point, whose intention is to grasp our geometrical intuitions of points as systems of shrinking regions of space. In this part we analyze separation structures and Grzegorczyk structures, and establish those of their properties which will be useful in the sequel. We prove that in the class of Urysohn spaces with the countable chain condition, to every topologically interpreted representative of a point in Grzegorczyk’s sense there corresponds exactly one point of the space. We also demonstrate that Tychonoff first-countable spaces give rise to complete Grzegorczyk structures. The results established below will be used in the second part, devoted to points and topological spaces.
With material on his early philosophical views, his contributions to set theory and his work on nominalism and higher-order quantification, this book offers a uniquely expansive critical commentary on one of analytical philosophy’s great ...
Graham Priest claims that Asian philosophy is going to constitute one of the most important aspects of 21st-century philosophical research. Assuming that this statement is true, it raises the methodological question of whether the dominant comparative and contrastive approaches will be supplanted by a more unifying methodology that works across different philosophical traditions. In this article, I concentrate on the use of empirical evidence from nonphilosophical disciplines, which enjoys popularity among many Western philosophers, and examine the application of this approach to early Chinese philosophy. I specifically focus on Confucian ethics and the study of altruism in experimental psychology.
As indicated in the title, this paper is devoted to the problem of defining mereological (collective) sets. Starting from the basic properties of sets in mathematics and the differences between them and so-called conglomerates in Section 1, we go on in Section 2 to explicate informally what it means to join many objects into a single entity from the point of view of mereology, the theory of the parthood (part of) relation. In Section 3 we present and motivate basic axioms for the parthood relation and point to their most fundamental consequences. The next three sections are devoted to a formal explication of the notion of a mereological (collective) set in terms of sums, fusions and aggregates. We do not give proofs of all theorems: some of them are complicated, and their presentation would divert the reader’s attention from the main topic of the paper. In such cases we indicate where the proofs can be found and analyzed by those who are interested.
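As a point of reference, the classical definition of mereological sum around which such explications revolve (the paper's own variants of sum, fusion and aggregate may differ in detail) can be stated as follows, with \sqsubseteq for parthood and \circ for overlap:
x \circ y \;:=\; \exists z\,(z \sqsubseteq x \wedge z \sqsubseteq y)
\mathrm{Sum}(x, F) \;:=\; \forall y\,(F(y) \rightarrow y \sqsubseteq x) \wedge \forall y\,(y \sqsubseteq x \rightarrow \exists z\,(F(z) \wedge y \circ z))
That is, x is a sum of the Fs just in case every F is a part of x and every part of x overlaps some F.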
The goal is to sketch a nominalist approach to mathematics which, just like neologicism, employs abstraction principles, but, unlike neologicism, is not committed to the idea that mathematical objects exist and does not insist that abstraction principles establish the reference of abstract terms. It is well known that neologicism runs into certain philosophical problems and faces the technical difficulty of finding appropriate acceptability criteria for abstraction principles. I will argue that a modal and iterative nominalist approach to abstraction principles circumvents those difficulties while still being able to put abstraction principles to a foundational use.
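The canonical example of the abstraction principles at issue is Hume's Principle, which the neologicist reads as fixing the reference of numerical terms:
\#F = \#G \;\leftrightarrow\; F \approx G
where F \approx G says that there is a one-to-one correspondence between the Fs and the Gs. On the modal, nominalist reading gestured at here, such a principle is taken to govern how numerical terms could be introduced rather than to describe independently existing abstract objects; this gloss is only a rough indication of the approach, not the paper's official formulation.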
In this paper we give what is probably an exhaustive analysis of the geometry of solids sketched by Tarski in his short paper [20, 21]. We show that in order to prove the theorems stated in [20, 21] one must enrich Tarski's theory with a new postulate asserting that the universe of discourse of the geometry of solids coincides with arbitrary mereological sums of balls, i.e., with solids. We show that, once such a solution has been adopted, Tarski's Postulate 4 can be omitted, together with its versions 4' and 4". We also prove that the equivalence of postulates 4, 4' and 4" is not provable in any theory whose domain contains objects other than solids. Moreover, we show that the concentricity relation as defined by Tarski must be transitive in the largest class of structures satisfying Tarski's axioms. We build a model (in three-dimensional Euclidean space) of the theory of so-called T*-structures and present a proof of the fact that it is the only model of this theory (up to isomorphism). Moreover, we propose different categorical axiomatizations of the geometry of solids. In the final part of the paper we answer the question concerning the logical status (within the theory of T*-structures) of the definition of the concentricity relation given by Tarski.
One of the standard views on plural quantification is that its use commits one to the existence of abstract objects: sets. On this view, claims like ‘some logicians admire only each other’ involve ineliminable quantification over subsets of a salient domain. The main motivation for this view is that plural quantification has to be given some sort of semantics, and of the two main candidates—substitutional and set-theoretic—only the latter can provide the language of plurals with the desired expressive power (given that the nominalist seems committed to the assumption that there can be at most countably many names). To counter this approach I develop a modal-substitutional semantics for plural quantification (on which plural variables, roughly speaking, range over ways names could be) and argue for its nominalistic acceptability.
The essay reviews references to Immanuel Kant’s transcendental philosophy in the work of Helmuth Plessner. First discussed is the Krisis der transzendentalen Wahrheit im Anfang, in which Plessner develops a critique of the transcendental method and shows that overcoming its crisis requires philosophy to rigorously restrict the applicability of theory to the experimental sphere and to put it up for judgment by the tribunal of practical reason. Next under scrutiny is Plessner’s programmatic text in philosophical anthropology, in which he strives to employ Kant’s deductive method for the construction of his own system of organic forms.
The recently predominant forms of anti-realism claim that all truths are knowable. We argue that in a logical explanation of the notion of knowability more attention should be paid to its epistemic part. Notions of group knowledge are especially useful in such an explanation. In this paper we focus mainly on the notion of distributed knowability and show its effectiveness in the case of Fitch's paradox. The proposed approach raises some philosophical questions, to which we try to find responses. We also show how our view of Fitch's paradox can be combined with others. Next we give an answer to the question: is distributed knowability factive? At the end, we present some details concerning the construction of an anti-realist modal epistemic logic.
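For reference, the standard derivation behind Fitch's paradox runs as follows (the exact formulation used in the paper, with a group operator, may differ). The knowability principle says p \rightarrow \Diamond K p. Substituting the Fitch conjunction p \wedge \neg K p yields (p \wedge \neg K p) \rightarrow \Diamond K(p \wedge \neg K p); but K(p \wedge \neg K p) entails K p \wedge K \neg K p, and by factivity (K q \rightarrow q) this gives K p \wedge \neg K p, a contradiction. Hence K(p \wedge \neg K p) is impossible, so p \wedge \neg K p fails for every p, i.e. every truth is known. Replacing K with the distributed-knowledge operator D_G (where D_G \varphi holds when \varphi follows from pooling the knowledge of all members of the group G) gives the notion of distributed knowability examined above.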
This study examines the methods students use to cheat on class examinations and suggests ways of deterring such cheating, using an international sample from Australia, China, Ireland, and the United States. We also examine the level of cheating and the reasons for cheating that prior research has highlighted, as a way of demonstrating that our sample is equivalent to those in prior studies. Our results confirm the results of prior research, which primarily employs students from the United States. The data indicate that actions such as having multiple versions of the examination and scrambling the questions on these versions would deter cheating. In addition, given the increased level of cheating and students' increased perception of the social acceptability of cheating in college, the data provided by our international sample also suggest that some relatively simple precautions by instructors could dramatically reduce the level of cheating on in-class examinations.
Building on our diverse research traditions in the study of reasoning, language and communication, the Polish School of Argumentation integrates various disciplines and institutions across Poland in which scholars are dedicated to understanding the phenomenon of the force of argument. Our primary goal is to craft a methodological programme and establish organisational infrastructure: this is the first key step in facilitating and fostering our research movement, which joins people with a common research focus, complementary skills and an enthusiasm to work together. This statement—the Manifesto—lays the foundations for the research programme of the Polish School of Argumentation.
Brown (The Laboratory of the Mind: Thought Experiments in the Natural Sciences, 1991a, 1991b; Contemporary Debates in Philosophy of Science, 2004; Thought Experiments, 2008) argues that thought experiments (TE) in science cannot be arguments and cannot even be represented by arguments. He rests his case on examples of TEs which proceed through a contradiction to reach a positive resolution (Brown calls such TEs “platonic”). This, supposedly, makes it impossible to represent them as arguments for logical reasons: there is no logic that can adequately model such phenomena. (Brown further argues that, this being the case, “platonic” TEs provide us with irreducible insight into the abstract realm of laws of nature.) I argue against this approach by describing how “platonic” TEs can be modeled within the logical framework of adaptive proofs for prioritized consequence operations. To show how this mundane apparatus works, I use it to reconstruct one of the key examples used by Brown: Galileo’s TE involving falling bodies.
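In outline, Galileo's thought experiment (recalled here only informally; its adaptive-logic reconstruction is the paper's contribution) runs as follows:
(A) Assume, with Aristotle, that heavier bodies fall faster than lighter ones.
(1) Tie a heavy body H to a light body L. Since L falls slower, it should retard H, so the composite H+L falls slower than H alone.
(2) But H+L is heavier than H, so by (A) it should fall faster than H alone.
(3) (1) and (2) contradict each other; rejecting (A) leads to the positive resolution that all bodies fall at the same rate.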
Safety and care for the natural environment are two of the most important values driving the scientific enterprise in the twentieth century. Researchers and innovators often develop new technologies aimed at pollution reduction, and thereby satisfy the striving for the fulfilment of these values. This work is often incentivized by policy makers. According to EU Directive 2006/40/EC on mobile air conditioning, since 2013 all newly approved vehicles have had to be filled with a refrigerant with a low global warming potential (GWP). Extensive and expensive research financed by leading car manufacturers led to the invention of the refrigerant R-1234yf, with a GWP < 1, which was a huge improvement. For a proper understanding of this case it will be useful to relate it to the idea of responsible innovation, which is currently being developed and is quickly attracting attention. I proceed in the following order. Firstly, I present the relevant properties of R-1234yf and discuss the controversy associated with its marketing. Secondly, I examine the framework of responsible innovation. In greater detail I discuss the notions of care for future generations and collective responsibility. Thirdly, I apply the offered framework to the case study at hand. Finally, I draw some conclusions which go in two directions: one is to make some suggestions for improving the framework of RI, and the other is to identify missed opportunities for developing a truly responsible refrigerant.
We investigate what happens when ‘truth’ is replaced with ‘provability’ in Yablo’s paradox. By diagonalization, appropriate sequences of sentences can be constructed. Such sequences contain no sentence decided by the background arithmetical theory, assumed to be consistent and sufficiently strong. If the provability predicate satisfies the derivability conditions, each such sentence is provably equivalent to the consistency statement and to the Gödel sentence. Thus any two such sentences are provably equivalent to each other. The same holds for the arithmetization of the existential Yablo paradox. We also look at a formulation which employs Rosser’s provability predicate.
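Schematically, the provability analogue of the Yablo sequence in question consists of sentences S_0, S_1, S_2, \ldots obtained by diagonalization so that, for a background theory T,
T \vdash S_n \;\leftrightarrow\; \forall k\,(k > n \rightarrow \neg\mathrm{Prov}_T(\ulcorner S_k \urcorner)),
i.e. each S_n says that no later member of the sequence is provable in T; the exact construction used in the paper may differ in detail. The result reported above is that, under the derivability conditions, each such S_n is provably equivalent in T to \mathrm{Con}(T).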
A theory of definitions which places the eliminability and conservativeness requirements on definitions is usually called the standard theory. We examine a persistent myth which credits this theory to Leśniewski, a Polish logician. After a brief survey of its origins, we show that the myth is highly dubious. First, no place in Leśniewski's published or unpublished work is known where the standard conditions are discussed. Second, Leśniewski's own logical theories allow for creative definitions. Third, Leśniewski's celebrated “rules of definition” lay down merely syntactical restrictions on the form of definitions: they do not provide definitions with such meta-theoretical requirements as eliminability or conservativeness. On the positive side, we point out that among the Polish logicians, in the 1920s and 1930s, a study of these meta-theoretical conditions is more readily found in the works of Łukasiewicz and Ajdukiewicz.
The main focus of this paper is to develop an adaptive formal apparatus capable of capturing (certain types of) reasoning conducted within the framework of the so-called dynamic conceptual frames. I first explain one of the most recent theories of concepts developed by cognitivists, in which a crucial part is played by the notion of a dynamic frame. Next, I describe how a dynamic frame may be captured by a finite set of first-order formulas and how a formalized adaptive framework for reasoning within a dynamic frame can be developed.
Sobociński, in his paper on Leśniewski's solution to Russell's paradox (1949b), argued that Leśniewski succeeded in explaining it away. The general strategy of this alleged explanation is presented. Its key element is the distinction between the collective (mereological) and the distributive (set-theoretic) understanding of the set. The mereological part of the solution, although correct, is likely to fall short of providing foundations for mathematics. I argue that the remaining part of the solution, which suggests a specific reading of the distributive interpretation, is unacceptable: it follows from it that every individual is an element of every individual. Finally, another Leśniewskian-style approach, which uses so-called higher-order epsilon connectives, is considered and its weakness is indicated.
Advances in the analytical understanding of the biosphere’s biogeochemical cycles have spawned the concepts of Gaia and noosphere. Earlier in this century, in concert with the Jesuit paleontologist Pierre Teilhard de Chardin, the natural scientist Vladimir Vernadsky developed the notion of the noosphere: an evolving collective human consciousness on Earth exerting an ever-increasing influence on biogeochemical processes. More recently, the chemist James Lovelock postulated the Earth to be a self-regulating system made up of biota and their environment with the capacity to maintain a planetary steady state favorable to life. This is the Gaia hypothesis. To many, Gaia and noosphere represent contradictory interpretations of humanity’s relation to planetary ecology. Noosphere emphasizes a free will and obligation to shape the destiny of humanity on Earth through technology and new kinds of social relations. In contrast, Gaia invokes mysterious mechanisms of planetary evolution that lie beyond human control and understanding. I argue that, if brought together, noosphere and Gaia can provide a useful symbol for guiding human interventions in global ecology, because the contradictions between a nature-centered view of Gaia and a human-centered view of noosphere are becoming irrelevant with the emergence of an analytical science of the biosphere.
It is a commonplace remark that the identity relation, even though not expressible in a first-order language without identity with classical set-theoretic semantics, can be defined in a language without identity as soon as we admit second-order, set-theoretically interpreted quantifiers binding predicate variables that range over all subsets of the domain. However, there are fairly simple and intuitive higher-order languages with set-theoretic semantics in which the identity relation is not definable. The point is that the definability of identity in higher-order languages not only depends on what the variables range over, but is also sensitive to how predication is construed. This paper is a follow-up to , where it has been proven that no actual axiomatization of Leśniewski’s Ontology determines the standard semantics for the epsilon connective.
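The second-order definition alluded to at the start is the Leibnizian one:
x = y \;:=\; \forall P\,(P(x) \leftrightarrow P(y)),
which yields genuine identity when the predicate variables range over all subsets of the domain and predication is read as membership; in the higher-order languages discussed here, where predication is construed differently, no such definition need be available.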
Salmon and Soames argue against nominalism about numbers and sentence types. They employ (respectively) higher-order and first-order logic to model certain natural language inferences and claim that the natural language conclusions carry commitment to abstract objects, partially because their renderings in those formal systems seem to do so. I argue that this strategy fails because the nominalist can accept those natural language consequences, provide them with plausible and non-committing truth conditions, and account for the inferences made without committing themselves to abstract objects. I sketch a modal account of higher-order quantification on which, instead of ranging over sets, higher-order quantifiers are used to make (logical) possibility claims about which predicate tokens can be introduced. This approach provides an alternative account of the truth conditions of natural language sentences which seem to employ higher-order quantification, thus allowing the nominalist to evade Salmon’s argument. I also show how the nominalist can account for the occurrence of apparently singular abstract terms in certain true statements. I argue that the nominalist can achieve this by, first, dividing singular terms into real singular terms (referring to concrete objects) and only apparent singular terms (called onomatoids), introduced for the sake of brevity and simplicity, and then providing an account of nominalistically acceptable truth conditions of sentences containing onomatoids. I develop such an account in terms of modally interpreted abstraction principles and argue that applying this strategy to Soames’s argument allows the nominalists to defend themselves.
Richard Swinburne (Swinburne and Shoemaker 1984; Swinburne 1986) argues that human beings currently alive have non-bodily immaterial parts called souls. In his main argument in support of this conclusion (the modal argument), roughly speaking, he infers the actual existence of souls from the assumption that it is logically possible that a human being survives the destruction of their body, together with a few additional premises. After a brief presentation of the argument, we describe the main known objection to it, called the substitution objection (SO for short), which is raised by Alston and Smythe (1994), Zimmerman (1991) and Stump and Kretzmann (1996). We then explain Swinburne's response to it (1996). This constitutes the background for the discussion that follows. First, we formalize Swinburne's argument in a quantified propositional modal language so that it is logically valid and contains no tacit assumptions, clearing up some notational issues as we go. Having done that, we explain why we find Swinburne's response unsatisfactory. Next, we indicate that even though SO is quite compelling (albeit for a slightly different reason than the one given previously in the literature), a weakening of one of the premises yields an argument that is valid, has the same conclusion, and yet is immune to SO. Even this version of the argument, we argue, is epistemically circular.
In this paper I consider the idea of external language and examine the role it plays in our understanding of human linguistic practice. Following Michael Devitt, I assume that the subject matter of a linguistic theory is not a psychologically real computational module, but a semiotic system of physical entities equipped with linguistic properties. What are the physical items that count as linguistic tokens, and in virtue of what do they possess phonetic, syntactic and semantic properties? According to Devitt, the entities in question are particular bursts of sound or bits of ink that count as standard linguistic entities — that is, strings of phonemes, sequences of words and sentences — in virtue of the conventional rules that constitute the structure of the linguistic reality. In my view, however, the bearers of linguistic properties should rather be understood as complex physical states of affairs — which I call, following Ruth G. Millikan, complete linguistic signs — within which one can single out their narrow and wide components, that is, (i) sounds or inscriptions produced by the speaker and (ii) salient aspects of the context of their production. Moreover, I do not share Devitt's view on the nature of linguistic properties. Even though I maintain the general idea of convention-based semantics — according to which semantic properties of linguistic tokens are essentially conventional — I reject the Lewisian robust account of conventionality. Following Millikan, I assume that language conventions involve neither regular conformity nor mutual understanding.
My aim in this paper is to defend the view that the processes underlying early vision are informationally encapsulated. Following Marr (1982) and Pylyshyn (1999), I take early vision to be a cognitive process that takes sensory information as its input and produces so-called primal sketches or shallow visual outputs: informational states that represent visual objects in terms of their shape, location, size, colour and luminosity. Recently, some researchers (Schirillo 1999, Macpherson 2012) have attempted to undermine the idea of the informational encapsulation of early vision by referring to experiments that seem to show that colour recognition is affected by the subject's beliefs about the typical colour of objects. In my view, however, one can reconcile the results of these experiments with the position that early vision is informationally encapsulated. Namely, I put forth a hypothesis according to which the early vision system has access to a local database that I call the mental palette and define as a network of associative links whose nodes stand for shapes and colours. The function of the palette is to facilitate colour recognition without employing central processes. I also describe two experiments by which the mental palette hypothesis can be tested.