Bas van Fraassen maintains that the actual function of optical instruments is producing images. Still, the output of a telescope is different from that of a microscope, for in the latter case it is not possible to empirically investigate the geometrical relations between the observer, the image and the detected entity, while in the former it is, at least in principle. In this paper I argue that this is a weak argument to support the belief in the existence of exoplanets that, according to van Fraassen, comes with accepting a theory that posits these entities. If a constructive empiricist asserts the empirical adequacy of such a theory, she might instead be relying on typical realist arguments, of the very same ilk as those used to defend the veridicality of microscopic images. Perhaps the time has come for van Fraassen to explain his view on telescopes.
Discussing the hundreds of detections of extrasolar planets that have supposedly taken place since 1989, which he (incorrectly) regards as instances of observation, Peter Kosso rightly noted that, by Bas van Fraassen’s standards, these celestial objects would count as observable. Indeed, such bodies could undoubtedly be observed directly (without the aid of instruments) under the appropriate conditions. But, Kosso adds, “this kind of externalist epistemology, which allows justification to be based on information we do not possess (we are not [currently] in a position to see extrasolar planets with the naked eye), does not help to decide which particular scientific claims warrant belief” (Kosso 2006, 225, note 1). Van Fraassen probably commits a petitio principii not unlike those he has often exposed in the writings of realist authors when he appeals to the fact that the detected object is observable in order to grant a given detection the status of ‘observation’. The point, once again, is to understand what counts as observation, particularly when it involves the use of some instrument. Endorsing what Otávio Bueno calls the ‘internalist standard’ seems to lend solidity to the qualification of certain acts as observations. One of the main consequences of this adoption is that such a criterion, drawing no relevant distinction between ‘direct’ detections and instrument-mediated detections, allows the line between observables and unobservables to be drawn differently from how van Fraassen believes it should be, including among the observables more (kinds of) entities than he seems willing to admit.
In order to defend his controversial claim that observation is unaided perception, Bas van Fraassen, the originator of constructive empiricism, suggested that, for all we know, the images produced by a microscope could be in a situation analogous to that of rainbows, which are ‘images of nothing’. He added that reflections in the water, rainbows, and the like are ‘public hallucinations’, but it is not clear whether this constitutes a separate ontological category or an empty set. In this paper an argument will be put forward to the effect that rainbows can be thought of as events, that is, as part of a subcategory of entities that van Fraassen has always considered legitimate phenomena. I argue that rainbows are actually not images in the relevant (representational) sense and that there is no need to ontologically inflate the category of entities in order to account for them, which would run counter to the empiricist principle of parsimony.
This is the Portuguese translation of Kathleen Okruhlik’s paper “Bas van Fraassen’s Philosophy of Science and His Epistemic Voluntarism” (2014). Bas van Fraassen’s anti-realist account of science has played a major role in shaping recent philosophy of science. His constructive empiricism, in particular, has been widely discussed and criticized in the journal literature and is a standard topic in philosophy of science course curricula. Other aspects of his empiricism are less well known, including his empiricist account of scientific laws, his relatively recent re-evaluation of what it is to be an empiricist, and his empiricist structuralism. This essay attempts to provide an overview of these diverse aspects of van Fraassen’s empiricism and to show how they relate to one another. It also focuses on the nature of van Fraassen’s epistemic voluntarism and its relationship to his empiricist philosophy of science.
2020 is the year of the fortieth anniversary of Bas van Fraassen’s seminal book The Scientific Image. It is quite surprising, after such a long time, and considering how much the author’s proposal was debated during the last four decades, to find a new review of it in the March issue of Metascience. In “Concluding Unscientific Image”, Hans Halvorson claims that the work of the founder of constructive empiricism contains not only a defense of an anti-realist perspective on science (and, at the same time, a critique of scientific realism), but also a revolt against the way of doing philosophy that, since Quine, seemed to be hegemonic in analytic philosophy. The present study focuses on Halvorson’s allegations about what maintaining the empirical adequacy of a theory would encompass (and that, according to him, van Fraassen has in mind), and aims at showing that, perhaps, they do not correspond to what van Fraassen actually defends in his book.
In their recent “A modest defense of manifestationalism” (2015), Asay and Bordner defend this position from a quite famous criticism put forward by Rosen (1994), according to which, while manifestationalism can be seen as more compatible with the letter of empiricism than other popular stances, such as constructive empiricism, it nonetheless fails to make sense of science. The two authors reckon that Rosen’s argument is actually flawed. In their view, manifestationalism could in fact represent a legitimate thesis about the nature of scientific inquiry. In this paper, I will show that Asay and Bordner’s criticisms of Rosen are actually off target. Moreover, they rest upon an understanding of the aim of science that might serve their purposes, but that does not seem to be in line with the scientific enterprise. Perhaps constructive empiricism still represents the best compromise so far presented between strict empiricism and the acknowledgment of the rationality of science.
PRELIMINARY NOTE: the following text is the expanded version (corrected according to the reviewers’ indications) of the homonymous article published in the journal Archai in 2014. Owing to a technical problem, the first version, without the improvements suggested by the reviewers, ended up being published at the time. Here, then, is the ‘definitive’ version of the article “Zenão e a impossibilidade da analogia”: Reductio ad absurdum was elevated by Zeno of Elea to the status of the only method that would allow a glimpse of true reality, invisible both to the senses and to the ordinary way of thinking. Showing a certain continuity with earlier philosophers, not only in the search for a procedure by which speculation could advance, but also along the same route away from what is nearest, most familiar and particular (visible) toward what is less known, distant and universal (invisible), to put it in Aristotelian terms, Zeno used aporetic arguments as the only possible path for catching a glimpse of the ‘realm of Being’. This, indeed, is invisible not only to our senses but also to our ordinary reasoning. Hence only the ‘way of not-being’, the only one that could be trodden after Parmenides, as Wolff says, allows us to form an idea, however vague, of what is ‘truly invisible’. So invisible as to be unreachable even by thought.
In his famous book "Seeing Dark Things: The Philosophy of Shadows" (2008), Roy Sorensen put forward a ‘blocking theory of shadows’, a causal view of these entities according to which a shadow is an absence of light caused by blockage. This approach allows him to solve a quite famous riddle about shadows, ‘the Yale puzzle’, which was devised by Robert Fogelin in the late 1960s and which Sorensen presents in the form mentioned by Bas van Fraassen (1989). István Aranyosi has recently criticized Sorensen’s solution to the Yale puzzle, on the grounds that it does not withstand another version of the riddle, which Aranyosi calls ‘the Bilkent puzzle’. A new perspective on shadows, the ‘Material Exstitution View’, which allegedly makes it possible to solve both puzzles, could be adopted as an alternative. In this paper I will show that Sorensen’s blockage theory can actually handle both the Yale and the Bilkent puzzles, plus another one that I put forward (‘the donut puzzle’), which is instead fatal to Aranyosi’s position. As Sorensen puts it, nothing aside from the original blockage of light is needed.
Martin Kusch has recently defended Bas van Fraassen’s controversial view on microscopes, according to which these devices are not ‘windows on an invisible world’, but rather ‘image generators’. The two authors also claim that, since in a microscopic detection it is not possible to empirically investigate the geometrical relations between all the elements involved, one is entitled to maintain an agnostic stance about the reality of the entity allegedly represented by the produced image. In this paper I argue that, contrary to what Kusch maintains, this might not be a neutral way to render scientific evidence. Moreover, a constructive empiricist can support a realist interpretation of microscopic images. In fact, constructive empiricism and van Fraassen’s own anti-realism do not necessarily amount to the same thing.
In a recent work published in this journal, “Van Fraassen e a inferência da melhor explicação” (2016), Minikoski and Rodrigues da Silva identify four critical lines proposed by Bas van Fraassen against the form of abductive reasoning known as ‘inference to the best explanation’ (IBE). The first one, put forward by the Dutch philosopher in his seminal book The Scientific Image (1980), concerns the distinction between observable and unobservable entities. Minikoski and Rodrigues da Silva consider that the distinction is of no relevance to scientific practice. For this reason, they address van Fraassen’s allegations against IBE qua justification of the existence of unobservable entities in a couple of pages and prefer to focus on the other lines they identified. The aim of this work is to pore over the analysis that the two authors perform of van Fraassen’s mentioned argument and of some realists’ replies, particularly in the section that Minikoski and Rodrigues da Silva devote to this topic. This will allow us to clarify van Fraassen’s view of scientific practice and of the ‘immersion in the theoretical world-picture’. The importance and relevance of the distinction between observables and unobservables will also be reaffirmed.
This is a critical review of the book Variational Approach to Gravity Field Theories - From Newton to Einstein and Beyond (2017), written by the Italian astrophysicist Alberto Vecchiato. In his work, Vecchiato shows that physics, as we know it, can be built up from simple mathematical models that become more complex step by step, as new principles are gradually introduced. The reader is invited to follow the steps that lead from classical physics to relativity and to understand how this happens and why. Moreover, while presenting the variational approach to gravity field theories in a clear and elegant manner, Vecchiato is constantly concerned with guiding readers so that they understand the relevance of what he considers to be the most powerful technique and unifying concept of theoretical physics, and how it works. Each chapter is enriched with exercises, for which step-by-step solutions are offered, so that the reader can gain real insight into the topic presented and follow the reading as fruitfully as possible. The most innovative aspect of the book, however, is not the rigor and clarity of the treatment of the variational approach, but rather the point of view that Vecchiato maintains and that the reader is led to endorse too: physics is a human enterprise, prone to error and open to improvement, and not a complete and definitive system of truths about our world.
Astroparticle physics is an interdisciplinary field embracing astronomy, astrophysics and particle physics. In a recent paper on this topic, Brigitte Falkenburg argued that only scientific realism can make sense of it and that realist beliefs constitute an indispensable methodological principle of research in this discipline. The aim of this work is to show that there exists an anti-realist alternative to this account, along the lines of what Bas van Fraassen showed in his famous book The Scientific Image. Problems and results of astroparticle physics can be understood from an empiricist point of view too, namely that of van Fraassen’s constructive empiricism, which is a more modest and metaphysics-free alternative to scientific realism. Although constructive empiricism can make sense of science no worse than scientific realism does, van Fraassen’s goal is not to demonstrate that his stance is the only viable position, but just that it is not incoherent or proven false by his opponents. In this paper it will be shown that the constructive empiricist stance constitutes a legitimate alternative to scientific realism even when it comes to astroparticle physics, and that it does indeed make sense of this new discipline, pace Falkenburg.
In his last book (2008), Bas van Fraassen, the originator of constructive empiricism, put forward a table containing a categorization of images. His aim, however, was to discuss the reality of what they represent, not to address the issue of images per se. One of the consequences is that it remained an open question what ‘public hallucinations’ (reflections in the water, rainbows and the like) are. In this paper it will be argued that only images in the relevant (representational) sense should be considered as such. For this and other reasons, van Fraassen’s diagram should be amended. Moreover, as physics teaches us, the class of so-called ‘images’ that are actually objects is wider than van Fraassen reckons. The set of observable objects does not contain only concrete things, but goes beyond what ‘common sense realism’ suggests. In addition to rocks, oceans and bicycles, we can also see rainbows, reflections in the water and the like.
Reductio ad absurdum was elevated by Zeno to the status of the only method permitting us to descry true reality, invisible both to the senses and to the common way of thinking. Showing some continuity with earlier philosophers, not only in the search for a procedure by which speculation could advance, but also along the same route departing from what is nearer, more familiar and particular (visible) toward what is less known, more distant and universal (invisible), to put it in Aristotelian terms, Zeno made use of aporetic arguments as the only possible way to catch a glimpse of the ‘domain of Being’. This, in fact, is invisible not only to our senses, but also to our ordinary reasoning. Therefore, only the ‘way of not-being’, the only one that could be walked after Parmenides, as Wolff says, allows us to gain an insight into what is ‘really invisible’. So invisible that it is unreachable even by thought.
According to Bas van Fraassen, a postulated entity which can only be detected by means of some instrument should not be considered observable. In this paper I argue that (1) this is not correct; (2) someone can be a constructive empiricist, adhering to van Fraassen’s famous anti-realist position, even while admitting that many entities only detectable with a microscope are observable. The case of the paramecium, a very well-known single-celled organism, is particularly instructive in this respect. I maintain that we actually observe paramecia and do not just detect them, contrary to what van Fraassen claims. As a matter of fact, even if we can only perceive these protozoans by using a microscope, we are in a position to know that the relevant counterfactual conditions (like the ones Bueno proposed in 2011) are met. Moreover, paramecia satisfy the observability and existence criteria proposed by Buekens (1999) and Ghins (2005). But admitting paramecia and the like among the observables does not threaten Constructive Empiricism, for there will always be a line between observables and unobservables on which van Fraassen’s anti-realism can rest.
Bas van Fraassen’s antirealist view of science and its aim, constructive empiricism, notoriously rests upon a distinction between observable and unobservable entities. In order to back his empiricist stance, the Dutch philosopher put forward his own characterization of observability. Nonetheless, he acknowledges that the point of constructive empiricism is not lost if the line is drawn in a somewhat different way from how he draws it. This means that other characterizations of observability can support this antirealist stance, provided they allow for a viable distinction between the observable and the unobservable. The aim of this work, however, is not to propose another characterization of observability that fits constructive empiricism, but to put forward a small amendment to van Fraassen’s own antirealism, to the effect that it can actually be seen as a coherent, albeit controversial, position, since its present consistency might be called into question.
The emphasis on the role of observation, one of the hallmarks of Empiricism, is reaffirmed by the primacy of the distinction between observable and unobservable in Bas van Fraassen’s Constructive Empiricism. In this paper it will be shown that, despite being one of the main topics of discussion in contemporary philosophy of science, particularly thanks to van Fraassen, the question of observation and observability is actually as old as philosophy itself and has to do with the willingness that defines empiricism to keep ‘within the limits’.
The concept of observability is of key importance for a consistent defense of Constructive Empiricism. This anti-realist position, originally presented in 1980 by Bas van Fraassen in his book The Scientific Image, crucially depends on the observable/unobservable dichotomy. Nevertheless, the question of what it means to observe has been addressed in an unsatisfactory and inadequate manner by van Fraassen, and this represents an important lacuna in his philosophical position. The aim of this work is to propose a characterization of the act of observation able to give the necessary support to the ‘rough guide’ to ‘observable’ that can be found in the aforementioned book. Countering van Fraassen’s own statements that observability is a matter for scientific inquiry alone, not for philosophy, we maintain that philosophers’ attempts to deal with this subject are legitimate. We will show that van Fraassen ended up doing a philosophical analysis of observation himself, albeit in a fragmentary way. We believe, though, that this question should be dealt with methodically, ‘following the rules’ of a ‘proper’ philosophical analysis, as we have attempted to do in this work. We will propose a way of conceiving the act of observation, different from van Fraassen’s, that can help not only to ground the distinction between observable and unobservable, upon which Constructive Empiricism rests, but also to bring this anti-realist position closer to scientific practice, which is one of its desiderata, without neglecting the philosophical dimension of the issue. This proposal does not represent an ad hoc ‘solution’ for Constructive Empiricism, however, but a characterization aspiring to universal reach.
The primacy of the act of observation, one of the hallmarks of empiricism, found new life in the centrality of the distinction, made in Bas van Fraassen's constructive empiricism, between observable and unobservable. As Elliott Sober has pointed out, however, it is not clear what van Fraassen understands by observing an object. Worse, the Dutch philosopher does not seem to consider that a clarification of this point is necessary. This, of course, represents an important lacuna in a position generally considered the main reference for modern empiricism. My goal is to take up again the counterfactual conditionals characterizing perception that Otávio Bueno presented in 2011 in this journal, and also to consider the observability and existence criteria proposed by Filip Buekens and Michel Ghins, in order to arrive at a definition of observation that gives van Fraassen's observability concept the support it actually lacks, without presenting itself as an ad hoc solution.
Constructive empiricism is a prominent anti-realist position whose aim is to make sense of science. As is well known, it also crucially depends on the distinction between what is observable and what scientific theories postulate but is unobservable to us. Accordingly, adopting an adequate notion of observability is in order, on pain of failing to achieve the goal of grasping science and its aim. Bas van Fraassen, the originator of constructive empiricism, identifies observation with unaided (at least in principle) human perception. So far, though, he has not put forward any convincing argument to support this (unpopular) stand, on the grounds that observability is (allegedly) a matter for empirical investigation and not for philosophical analysis. Nonetheless, he seems to have introduced a criterion for observability that is not the result of any scientific research and is not supported by any scientific theory. Countering his own words, he seems instead to have reflected qua philosopher on how an empiricist should interpret the meaning of the verb ‘to observe’, and then tried to defend his point of view by means of metaphors and analogies. But the very same metaphors and analogies van Fraassen put forward could be used to back up the opposite position. Worse, not only does his criterion run counter to common sense, it does not work either. Perhaps the time has come for van Fraassen to put forward or endorse alternative criteria of observability.
The act of observing is crucial for constructive empiricism, Bas van Fraassen's celebrated position on the aim of science. As Buekens and Muller noted in 2012, the Dutch philosopher should have characterized observation as an intentional act, because observation in science has a purpose. In the present article, which will also address the distinction between observing and observing that, introduced by Hanson and Dretske, it will be shown that considerations about the intentionality of the act of observing are, on the contrary, unnecessary for drawing the distinction between observable and unobservable entities on which constructive empiricism depends.
The notion of epistemic community is crucial for the characterization of observability, a cornerstone of Bas van Fraassen’s constructive empiricism. As a matter of fact, observable is, to him, shorthand for observable-by-us. In this work, it will be shown that the alleged rigidity of the author of The Scientific Image, apparently not very keen on admitting changes in the epistemic community (constituted, according to him, by the human race), is actually an expression of modesty and good judgment; it means recognizing that the scientific enterprise is just a human activity, among many others.
In 1985, Alan Musgrave raised a serious objection against the possibility that a constructive empiricist could coherently draw the distinction between observables and unobservables. In his brief response in the same year, Bas van Fraassen claimed that Musgrave’s argument only works within the so-called ‘syntactic view’ of theories, while it loses its force in the context of the ‘semantic view’. But this response was not adequate, or so claimed F. A. Muller, who published two articles extending the epistemology of constructive empiricism. To do so, Muller provided a rigorous characterization of observability that requires the use of modal logic. The outcome was a new epistemology for constructive empiricism, which van Fraassen apparently endorsed. As will be shown in this article, however, Muller’s extended epistemology is superfluous. Moreover, and more importantly, Musgrave’s argument seems to be a pseudo-problem.
Is there an ontological question relative to van Fraassen’s Constructive Empiricism? It seems so, even though this philosophical position, a reference point for contemporary Empiricism, presents itself as an epistemological thesis. It is, furthermore, a very timely matter, as the Dutch philosopher has recently changed his mind about the possibility for us to observe common optical phenomena such as the rainbow. This reveals the necessity of a discussion about the concept of phenomena as used by van Fraassen, as Foss stated more than twenty years ago, but also, and this is an intertwined question, about what ontology is assumed by Constructive Empiricism.
The problem of the justification of inductive inferences, also known as ‘Hume’s problem’, seems to have lost strength since the early 20th century, following several authors’ denial that induction is the method of science. Van Fraassen went beyond this denial and recently stated that induction does not exist. It is our aim to show that, in order to bring forward a coherent vision of science, in his reconstruction it is the observable (a crucial term for his Constructive Empiricism) that is logically prior to the act of observing, and not the other way round. We call this ‘the reverse image of observation’.
Observation and observability represent a crucial topic in the philosophy of science, as the huge production of papers and books on the subject attests. Philosophy of perception, on the other hand, is a field of study that has effectively taken root only in recent decades. Even so, apparently, the main theories on observation have neglected the issue of determining what the object of a successful perception is. As a consequence, some theses that have recently been proposed are actually paradoxical, despite deriving from renowned and, prima facie, satisfactory and complete theories. This is the case with van Fraassen’s assertions on the (putative?) observation of images and rainbows (see 2001 and 2008) and with Sorensen’s claims about what one actually sees during a solar eclipse (see 2008). After putting forward a possible characterization of the object of perception, with no need to discuss the issue of intentionality, in this paper it will be shown that devoting adequate attention to this topic, together with acknowledging that observation is an action in which the subject plays a genuinely active role, would make it possible to avoid drawing conclusions that do not seem to be correct, such as the ones just mentioned. Any theory of observation will only be complete and adequate provided the object of perception is taken into account.
Philosophers working within the pragmatist tradition have pictured their relation to Kant and Kantianism in very diverse terms: some have presented their work as an appropriation and development of Kantian ideas, some have argued that pragmatism is an approach in complete opposition to Kant. This collection investigates the relationship between pragmatism, Kant, and current Kantian approaches to transcendental arguments in a detailed and original way. Chapters highlight pragmatist aspects of Kant’s thought and trace the influence of Kant on the work of pragmatists and neo-pragmatists, engaging with the work of Peirce, James, Lewis, Sellars, Rorty, and Brandom, among others. They also consider to what extent contemporary approaches to transcendental arguments are compatible with a pragmatist standpoint. The book includes contributions from renowned authors working on Kant, pragmatism and contemporary Kantian approaches to philosophy, and provides an authoritative and original perspective on the relationship between pragmatism and Kantianism.
In the Transcendental Dialectic of the first Critique, Kant famously claims that even if ideas and principles of reason cannot count as cognitions of objects, they can play a positive role when they are used “regulatively” with the aim of organizing our empirical cognitions. One issue is to understand what assuming “regulatively” means. What kind of attitude does this “assuming” imply? Another issue is to characterize the status of ideas and principles themselves. It is to this second issue that this article is dedicated. Some interpreters have suggested that ideas and principles that can be assumed regulatively consist of propositions that we know are false. Others have suggested that at least some regulative ideas, as for example the idea of the homogeneity of nature, consist of propositions that we know are true but are indeterminate. Still others argue that, in assuming regulative ideas and principles, we assume propositions that cannot be proved true, but are nonetheless possibly true. In this article, I reject the view that regulative ideas consist of true but indeterminate propositions. Moreover, I argue that it is wrong to presuppose that only one of the remaining two options can apply to Kant’s account of regulative ideas and principles. By contrast, I submit that while in some cases assuming regulative ideas and principles does involve assuming some propositions that we know are false, this is not true for all regulative ideas and principles. More specifically, assuming regulative ideas involves assuming false propositions when assuming them means assuming that a “totality of appearances” is given.
In his tightly argued, thought-provoking volume Interpretation without Truth, Pierluigi Chiassoni offers a groundbreaking, reductionist account of judicial fictions.1 Under Chiassoni’s view, judici...
Sleep and dreaming are important daily phenomena that are receiving growing attention from both the scientific and the philosophical communities. The increasingly popular predictive brain framework within cognitive science aims to give a full account of all aspects of cognition. The aim of this paper is to critically assess the theoretical advantages of Predictive Processing (PP, as proposed by Clark 2013, 2016, and Hohwy 2013) in defining sleep and dreaming. After a brief introduction, we overview the state of the art at the intersection between dream research and PP (with particular reference to Hobson and Friston 2012; Hobson et al. 2014). In the following sections we focus on two theoretically promising aspects of the research program. First, we consider the explanations of phenomenal consciousness during sleep (i.e. dreaming) and how it arises from the neural work of the brain. PP provides a good picture of the peculiarity of dreaming, but it cannot fully address the problem of how consciousness comes to be in the first place. We propose that Integrated Information Theory (IIT) (Oizumi et al. 2014; Tononi et al. 2016) is a good candidate for this role, and we show its advantages and points of contact with PP. After introducing IIT, we deal with the evolutionary function of sleeping and dreaming. We illustrate that PP fits with contemporary research on the important adaptive function of sleep, and we discuss why IIT can account for sleep mentation (i.e. dreaming) in evolutionary terms (Albantakis et al. 2014). In the final section, we discuss two future avenues for dream research that can fruitfully adopt the perspective offered by PP: (a) the role of bodily predictions in the constitution of sleeping brain activity and the dreaming experience, and (b) the precise role of the different stages of sleep (REM (rapid eye movement) and NREM (non-rapid eye movement)) in the constitution and refinement of the predictive machinery.
Social research, from economics to demography and epidemiology, makes extensive use of statistical models in order to establish causal relations. The question arises as to what guarantees the causal interpretation of such models. In this paper we focus on econometrics and advance the view that causal models are ‘augmented’ statistical models that incorporate important causal information which contributes to their causal interpretation. The primary objective of this paper is to argue that causal claims are established on the basis of a plurality of evidence. We discuss the consequences of ‘evidential pluralism’ in the context of econometric modelling.
We investigate an extension of the formalism of interpreted systems by Halpern and colleagues to model the correct behaviour of agents. The semantical model allows for the representation of, and reasoning about, states of correct and incorrect functioning behaviour of the agents, and of the system as a whole. We axiomatise this semantic class by mapping it into a suitable class of Kripke models. The resulting logic, $\text{KD}45_{n}^{i-j}$, is a stronger version of KD, the system often referred to as Standard Deontic Logic. We extend this formal framework to include the standard epistemic notions defined on interpreted systems, and introduce a new doubly-indexed operator representing the knowledge that an agent would have if it operated under the assumption that a group of agents is functioning correctly. We discuss these issues both theoretically and in terms of applications, and present further directions of work.
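As an illustrative aside (not drawn from the paper; the toy frame, valuation, and helper names below are assumptions made here), the deontic reading of KD rests on serial Kripke frames, and a few lines suffice to check that seriality validates the D axiom, □p → ◇p:

```python
# Toy Kripke semantics illustrating the D axiom (Box p -> Diamond p),
# which is valid on serial frames such as the one below.
# The frame, valuation, and helper names are illustrative assumptions.

worlds = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}          # serial: every world has a successor
val = {0: {"p"}, 1: {"p"}, 2: set()}  # which atoms hold at each world

def box(w, atom):
    """Box: atom holds at every R-successor of w."""
    return all(atom in val[v] for (u, v) in R if u == w)

def diamond(w, atom):
    """Diamond: atom holds at some R-successor of w."""
    return any(atom in val[v] for (u, v) in R if u == w)

for w in worlds:
    # D axiom: Box p -> Diamond p, guaranteed by seriality of R
    assert (not box(w, "p")) or diamond(w, "p")
print("D axiom holds at every world of this serial frame")
```

Dropping seriality (e.g. giving some world no successor) makes □p vacuously true and ◇p false there, which is why KD, and a fortiori KD45, demands it.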
This paper addresses a fundamental line of research in neuroscience: the identification of a putative neural processing core of the cerebral cortex, often claimed to be “canonical”. This “canonical” core would be shared by the entire cortex, and would explain why it is so powerful and diversified in tasks and functions, yet so uniform in architecture. The purpose of this paper is to analyze the search for canonical explanations over the past 40 years, discussing the theoretical frameworks informing this research. It will highlight a bias that, in my opinion, has limited the success of this research project: that of overlooking the dimension of cortical development. The earliest explanation of the cerebral cortex as canonical was attempted by David Marr, who derived putative cortical circuits from general mathematical laws, loosely following a deductive-nomological account. Although Marr’s theory turned out to be incorrect, one of its merits was to have put the issue of cortical circuit development at the top of his agenda. This aspect has been largely neglected in much of the research on canonical models that has followed. Models proposed in the 1980s were conceived as mechanistic. They identified a small number of components that interacted as a basic circuit, with each component defined as a function. More recent models have been presented as idealized canonical computations, distinct from mechanistic explanations, due to the lack of identifiable cortical components. Currently, the entire enterprise of coming up with a single canonical explanation has been criticized as misguided, and the premise of the uniformity of the cortex has been strongly challenged. This debate is analyzed here. The legacy of the canonical circuit concept is reflected in both positive and negative ways in recent large-scale brain projects, such as the Human Brain Project. One positive aspect is that these projects might achieve the aim of producing detailed simulations of cortical electrical activity; a negative one regards whether they will be able to find ways of simulating how circuits actually develop.
The logical hexagon (or hexagon of opposition) is a strange, yet beautiful, highly symmetrical mathematical figure, mysteriously intertwining fundamental logical and geometrical features. It was discovered more or less at the same time (i.e. around 1950), independently, by a few scholars. It is the successor of an equally strange (but mathematically less impressive) structure, the “logical square” (or “square of opposition”), of which it is a much more general and powerful “relative”. The discovery of the hexagon raised little interest, among logicians as among philosophers of logic, whereas the square played a very important theoretical role (both for logic and philosophy) for nearly two thousand years, before falling into disgrace in the first half of the twentieth century: it was, so to say, “sentenced to death” by the so-called analytical philosophers and logicians. Contrary to this, since 2004 a new, unexpectedly promising branch of mathematics (dealing with “oppositions”) has appeared, “oppositional geometry” (also called “n-opposition theory”, “NOT”), inside which the logical hexagon (as well as its predecessor, the logical square) is only one term of an infinite series of “logical bi-simplexes of dimension m”, itself just one term of the more general infinite series (of series) of the “logical poly-simplexes of dimension m”. In this paper we recall the main historical and theoretical elements of these neglected recent discoveries. After proposing some new results, among which the notion of “hybrid logical hexagon”, we show which strong reasons, inside oppositional geometry, make it understandable that the logical hexagon is in fact a very important and profound mathematical structure, destined for many fruitful future developments and probably the bearer of a major epistemological paradigm change.
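Since the square of opposition is treated abstractly above, a concrete illustration may help; the following sketch (not part of the paper; the set-theoretic reading of the categorical forms A, E, I, O is an assumption made here) verifies the square's four opposition relations over all non-empty subject terms of a small domain:

```python
# Square of opposition for categorical forms A, E, I, O, read
# set-theoretically over a 3-element domain. Illustrative sketch only;
# the traditional square assumes a non-empty subject term S.

from itertools import combinations

def powerset(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def A(S, P): return all(x in P for x in S)       # All S are P
def E(S, P): return all(x not in P for x in S)   # No S is P
def I(S, P): return any(x in P for x in S)       # Some S is P
def O(S, P): return any(x not in P for x in S)   # Some S is not P

domain = {1, 2, 3}
for S in powerset(domain):
    if not S:
        continue  # existential import: skip the empty subject term
    for P in powerset(domain):
        assert A(S, P) != O(S, P)         # A, O are contradictories
        assert E(S, P) != I(S, P)         # E, I are contradictories
        assert not (A(S, P) and E(S, P))  # A, E are contraries
        assert I(S, P) or O(S, P)         # I, O are subcontraries
        assert (not A(S, P)) or I(S, P)   # subalternation: A entails I
print("square of opposition verified on a 3-element domain")
```

The hexagon extends this square with two further vertices (the disjunction of A and E, and the conjunction of I and O), which is what makes it the more general structure the abstract describes.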
This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given of the ups and downs that have characterized neural network research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given, in order to sift out aspects pertaining to marketing or the sociology of research; the remaining aspects seem to certify a genuine value of deep learning that calls for explanation. The two alleged main propelling factors for deep learning, namely computing hardware performance and neuroscience findings, are scrutinized and evaluated as relevant but insufficient for a comprehensive explanation. We review various attempts that have been made to provide mathematical foundations able to justify the efficiency of deep learning, and we deem this the most promising road to follow, even if the current achievements are too scattered and relevant only to very limited classes of deep neural models. The authors’ take is that most of what can explain why deep learning works at all, and even very well, across so many domains of application is still to be understood, and further research addressing the theoretical foundations of artificial learning is still very much needed.
Whereas geometrical oppositions (logical squares and hexagons) have so far been investigated in many fields of modal logic (both abstract and applied), the oppositional geometrical side of “deontic logic” (the logic of “obligatory”, “forbidden”, “permitted”, . . .) has rather been neglected. Besides the classical “deontic square” (the deontic counterpart of Aristotle’s “logical square”), some interesting attempts have nevertheless been made to deepen the geometrical investigation of the deontic oppositions: Kalinowski (La logique des normes, PUF, Paris, 1972) has proposed a “deontic hexagon” as the geometrical representation of standard deontic logic, whereas Joerden (jointly with Hruschka, in Archiv für Rechts- und Sozialphilosophie 73:1, 1987), McNamara (Mind 105:419, 1996) and Wessels (Die gute Samariterin. Zur Struktur der Supererogation, Walter de Gruyter, Berlin, 2002) have proposed some new “deontic polygons” for dealing with conservative extensions of standard deontic logic internalising the concept of “supererogation”. Since 2004 a new formal science of the geometrical oppositions inside logic has appeared, that is, “n-opposition theory”, or “NOT”, which relies on the notion of the “logical bi-simplex of dimension m” (m = n − 1). This theory received a complete mathematical foundation in 2008, and since then several extensions. In this paper, by using it, we show that in standard deontic logic there are in fact many more oppositional deontic figures than Kalinowski’s unique “hexagon of norms” (more of them, and geometrically more complex ones: “deontic squares”, “deontic hexagons”, “deontic cubes”, . . ., “deontic tetraicosahedra”, . . .): the real geometry of the oppositions between deontic modalities is composed of the aforementioned structures (squares, hexagons, cubes, . . ., tetraicosahedra and hyper-tetraicosahedra), whose complete mathematical closure happens in fact to be a “deontic 5-dimensional hyper-tetraicosahedron” (a very regular oppositional solid).
This book presents a systematic interpretation of Charles S. Peirce’s work based on a Kantian understanding of his teleological account of thought and inquiry. Departing from readings that contrast Peirce’s treatment of purpose, end, and teleology with his early studies of Kant, Gabriele Gava instead argues that focusing on purposefulness as a necessary regulative condition for inquiry and semiotic processes allows for a transcendental interpretation of Peirce’s philosophical project. The author advances this interpretation by presenting original views on aspects of Peirce’s thought, including: a detailed analysis of Peirce’s ‘methodeutic’ and ‘speculative rhetoric,’ as well as his ‘critical common-sensism’; a comparison between Peirce’s and James’ pragmatisms in view of the account of purposefulness Gava puts forth; and an examination of the logical relationships that order Peirce’s architectonic classification of the sciences.
This paper portrays the later Wittgenstein’s conception of contradictions and his therapeutic approach to them. I will focus primarily on the Lectures on the Foundations of Mathematics, along with the Remarks on the Foundations of Mathematics. First, I will explain why Wittgenstein’s attitude towards contradictions is rooted in a rejection of the debate about realism and anti-realism in mathematics, and in his endorsement of logical pluralism. Then, I will explain Wittgenstein’s therapeutic approach to contradictions, and why it means that a contradiction is not a problem for logic and mathematics. Rather, contradictions are problematic when we do not know what to infer from them. Once a meaning is established through a new rule of inference, the contradiction becomes a usable expression like many others in our inferential apparatus. Thus, the apparent problem is dissolved. Finally, I will take three examples of dissolved contradictions from Wittgenstein to further clarify his notion. I will conclude by considering why his position on contradictions led him to clash with Alan Turing, and whether the latter was convinced by the Wittgensteinian proposal.
In this paper, I aim to explore what role persuasion plays in the early education of children. Following Wittgenstein, I claim that persuasion involves imparting a particular world-picture to a pupil by showing rather than explaining. This is because we cannot introduce a child to the hinges of a world-picture through a discursive argument. I will employ Wittgenstein’s remarks in On Certainty to define what persuasion is, and I will make use of the notes regarding seeing-an-aspect from the Philosophical Investigations to clarify this notion. Afterwards, I will contextualise this in early-childhood education and conclude by providing some examples of how persuasion solidifies hinges.
Some Carrollian posthumous manuscripts reveal, in addition to his famous ‘logical diagrams’, two mysterious ‘logical charts’. The first chart, a strange network making out of fourteen logical sentences a large 2D ‘triangle’ containing three smaller ones, has been shown equivalent—modulo the rediscovery of a fourth smaller triangle implicit in Carroll’s global picture—to a 3D tetrahedron, the four triangular faces of which are the 3+1 Carrollian complex triangles. As it happens, this hitherto very mysterious 3D logical shape—slightly deformed—has been rediscovered, independently of Carroll and much later, by a logician, a mathematician and a linguist studying the geometry of the ‘opposition relations’, that is, the mathematical generalisations of the ‘logical square’. We show that inside what is called equivalently ‘n-opposition theory’, ‘oppositional geometry’ or ‘logical geometry’, Carroll’s first chart corresponds exactly, duly reshap..