Ignited by Einstein and Bohr a century ago, the philosophical struggle about Reality remains unfinished, with no signs of a swift resolution. Despite vast technological progress fueled by the iconic EPR paper, the intricate link between the ontic and epistemic aspects of Quantum Theory (QT) has greatly hindered our grip on Reality and further progress in physical theory. Fallacies concealed by tortuous logical negations made EPR comprehension much harder than it could have been had Einstein written it himself in German. The paper is plagued with preconceptions about what a physical property is, the 'Uncertainty Principle', and the Principle of Locality. Numerous interpretations of QT vis-à-vis Reality exist and are keenly disputed. This is the first of a series of articles arguing for a physical interpretation called 'The Ontic Probability Interpretation' (TOPI). A gradual explanation of TOPI is given, intertwined with a meticulous logico-philosophical scrutiny of EPR. Part I focuses on the meaning of Einstein's 'Incompleteness' claim. A conceptual confusion, a preconception about Reality, and a flawed dichotomy are shown to be severe obstacles to the success of the EPR argument. Part II analyzes Einstein's 'Incompleteness/Nonlocality Dilemma'. Future articles will further explain TOPI, demonstrating its soundness and potential for nurturing theoretical progress.
In the social sciences, within the explanatory paradigm of structural individualism, a theory of action – like rational choice theory – models how individuals behave and interact at the micro level in order to explain macro observations as the aggregation of these individuals' actions. A central epistemological issue is that such theoretical models are stuck in a dilemma between the falsity of their basic assumptions and the triviality of their explanation. On the one hand, models which have great empirical success often rest on unrealistic or even knowingly false assumptions; on the other hand, more complex models, with additional more realistic hypotheses, can (trivially) adapt to a wide range of situations and thus lose their explanatory power. Our purpose here is epistemological and consists in asking to what extent demanding realistic assumptions in such cases is a relevant criterion for the acceptance of a given explanatory model. Via analogical reasoning with physics, we argue that this criterion is too strong and actually irrelevant. General physical principles are not just idealized or unrealistic; they can also be formulated in many different yet equivalent ways which do not imply the same fundamental unobservable entities or phenomena. However, the classification of phenomena that such principles allow us to highlight does not depend, in the end, on any particular formulation of these basic assumptions. This suggests that some hypotheses in theoretical models are actually not genuine empirical statements that could be independently tested, but only substrates of modeling embodying a classification principle. Thus, we develop a structural invariance criterion that we then apply to rational choice models in the social sciences.
We argue that this criterion allows us to escape the epistemological dilemma without condemning formal approaches like rational choice theory for their lack of realisticness and without being stuck in any antirealist viewpoint.
Starting with Kant’s undeveloped proposal of a “negative science,” the author describes how philosophy may be developed and strengthened by means of a systematic approach that seeks to identify and eliminate a widespread but seldom recognized form of systemic and propagating conceptual error. The paper builds upon the author’s book, CRITIQUE OF IMPURE REASON: HORIZONS OF POSSIBILITY AND MEANING (Studies in Theory and Behavior, 2021). The author’s purpose is twofold: first, to enable us to recognize the boundaries of what is referentially forbidden—the limits beyond which reference becomes meaningless—and second, to avoid falling victims to a certain broad class of conceptual confusions that lie at the heart of many major philosophical problems. By realizing these objectives, the boundaries of possible meaning are determined.
There is a widely accepted distinction between being directly responsible for a wrongdoing and being somehow indirectly or vicariously responsible for the wrongdoing of another person or collective. Often this is couched in analyses of complicity, and of complicity’s role in the relationship between individual and collective wrongdoing. Complicity is important because, inter alia, it allows us to make sense of individuals who may be blameless or blameworthy to a relatively low degree for their immediate conduct, but are nevertheless blameworthy to a higher degree for their implication in some larger (or another person’s) wrongdoing. In this paper, I argue that there is a distinctively epistemic kind of complicity. First, I motivate the distinction between direct and vicarious responsibility with three interlocking arguments, appealing respectively to: i) the structure of rational agency; ii) linguistic considerations; iii) the role of “principal vs. accomplice” in legal doctrine. I show how these arguments naturally extend to the epistemic domain, motivating an epistemic form of vicarious responsibility. I then examine complicity as a mechanism of vicarious epistemic responsibility. To fill this out, I engage with an epistemic analogue of the debate about the role of intention versus causal contribution in complicity. I propose a Causal Account of Epistemic Complicity, arguing that it accommodates a wide range of cases and enables fine-grained explanations of degrees of culpability for epistemic complicity. With an adequate account of epistemic complicity on hand, we can explain what is objectionable about an important class of epistemic agents who, on an individual level, may be epistemically blameless or blameworthy to a relatively low degree, but whose relation to other individuals or collectives nevertheless makes them epistemically blameworthy to a higher degree. I explore some broader implications of this result.
I argue that, despite claims that might be made to the contrary, no scientific evidence could ever prove that introspection is unreliable, even in principle. This paper was read at the annual POH symposium in Lake Wenatchee in May 2011.
David Miller propounds a theory of objective knowledge from which he mistakenly derives some consequences about question-begging and persuasion that appear to be false. He makes a further claim about persuasion that also seems false. I argue that Miller’s account of objective knowledge is explanatorily weak unless supplemented with an account of subjective knowledge and that the latter enables us to extricate Miller’s theory from the falsehoods he associates with it.
Although at first glance “facts” are the paradigms of straightforwardness, something about facts seems to invite perpetual controversy and dichotomizing. Innumerable bifurcations on the topic have included “Facts vs. Theories”, “Facts vs. Appearance”, “Facts vs. Values”, and, popular nowadays, “(Real) Facts vs. Fake Facts”. This paper aligns most with the facts vs. theories model, insofar as, whatever facts are, theories seem to be constructed stories that are necessary for connecting and interpreting the facts. Yet the boundary between the two is fluid and fuzzy: fact in one context is theory in another, depending on what is being accepted or contested at the time. This paper’s views are compatible with—but neutral on the plausibility of having—an optimism like Peirce’s that scientific inquiry may nonetheless converge towards a consensus. To illustrate the challenges of finding “straightforward” facts, the paper includes a case example related to research on possible health effects of exposure to electromagnetic-field radiation (EMF). The source paper for this 2018 update was originally presented at the University of Waterloo 30th Anniversary Philosophy Conference, 1993.
A metaphysics is provided showing that what is commonly called ‘the physical world’ can be deconstructed into three ‘levels’: a single, unified ‘noumenal world’ on which everything supervenes; a ‘phenomenal world’ that we each privately experience through direct perception of phenomena; and a ‘collective world’ that people in any given ‘language using group’ experience through learning, using and adapting that group’s language. This deconstruction is shown to enable a clear account of qualia and of how people can hold some things to be physically real even when it is clear no one can ever directly experience those things as phenomena. It is further shown to enable a single, internally consistent, largely empirically supported conceptual framework – a ‘metacosmology’ – able to encompass not only the physical world as people conceive of it for everyday purposes, and as scientists conceive of it for scientific purposes, but also people’s first person phenomenal experience of a physical world and, prospectively, the mechanisms by which such first person, conscious experiences can be generated.
Intelligence services are currently focusing on the fight against terrorism, leaving relatively few resources to monitor other security threats. For this reason, they often ignore external information activities that do not pose immediate threats to their government's interests. Extremely few external services operate globally; almost all other services focus on immediate neighbors or regions. These services usually depend on relationships with the global services for information on areas beyond their immediate neighborhoods, and often sell their regional expertise for what they need globally. A feature of both internal and external services is that they behave like a caste. DOI: 10.13140/RG.2.2.25847.68006.
Starting from the premise that akrasia is irrational, I argue that it is always a rational mistake to have false beliefs about the requirements of rationality. Using that conclusion, I defend logical omniscience requirements, the claim that one can never have all-things-considered misleading evidence about what's rational, and the Right Reasons position concerning peer disagreement.
The idea that the epistemology of modality is in some sense a priori is a popular one, but it has turned out to be difficult to precisify in a way that does not expose it to decisive counterexamples. The most common precisifications follow Kripke’s suggestion that cases of necessary a posteriori truth that can be known a priori to be necessary if true ‘may give a clue to a general characterization of a posteriori knowledge of necessary truths’. The idea is that whether it is contingent whether p can be known a priori for at least some broad range of sentences ‘p’. Recently, Al Casullo and Jens Kipper have discussed restrictions of such principles to atomic sentences. We show that decisive counterexamples even to such dramatically restricted Kripke-style principles can be constructed using minimal logical resources. We then consider further restrictions, and show that the counterexamples to the original principles can be turned into counterexamples to the further restricted principles. We conclude that, if there are any true restrictions of Kripke-style principles, then they are so weak as to be of little epistemological interest.
I propose and defend the conjecture that what explains why Gettiered subjects fail to know is the fact that their justified true belief depends essentially on unknown propositions. The conjecture follows from the plausible principle about inference in general according to which one knows the conclusion of one’s inference only if one knows all the premises it involves essentially.
The chapter develops a taxonomy of views about the epistemic responsibilities of citizens in a democracy. Prominent approaches to epistemic democracy, epistocracy, epistemic libertarianism, and pure proceduralism are examined through the lens of this taxonomy. The primary aim is to explore options for developing an account of the epistemic responsibilities of citizens in a democracy. The chapter also argues that a number of recent attacks on democracy may not adequately register the availability of a minimal approach to the epistemic responsibilities of citizens in a democracy.
Philosophy of mind and cognitive science (e.g., Clark and Chalmers 1998; Clark 2010; Palermos 2014) have recently become increasingly receptive to the hypothesis of extended cognition, according to which external artifacts such as our laptops and smartphones can—under appropriate circumstances—feature as material realisers of a person’s cognitive processes. We argue that, to the extent that the hypothesis of extended cognition is correct, our legal and ethical theorising and practice must be updated by broadening our conception of personal assault so as to include intentional harm towards gadgets that have been appropriately integrated. We next situate the theoretical case for extended personal assault within the context of some recent ethical and legal cases and close with some critical discussion.
Linguistic data are commonly considered a defeasible source of evidence from which it is legitimate to draw philosophical hypotheses and conclusions. Traditionally epistemologists have relied almost exclusively on linguistic data from western languages, with a primary focus on contemporary English. However, in the last two decades there has been an increasing interest in cross-linguistic studies in epistemology. In this entry, we provide a brief overview of cross-linguistic data discussed by contemporary epistemologists and the philosophical debates they have generated.
In this paper I explore the contours of a picture of normative epistemology that speaks centrally to the question of how to inquire rather than just the question of what to believe. What if normative epistemology were expanded to encompass inquiry in full? I argue that while a 'zetetic epistemology' builds on traditional normative epistemology in many appealing ways, it also faces some challenges.
The norm of assertion, to be in force, is a social norm. What is the content of our social norm of assertion? Various linguistic arguments purport to show that to assert is to represent oneself as knowing. But representing oneself as knowing does not entail that assertion is governed by a knowledge norm. At best these linguistic arguments provide indirect support for a knowledge norm. Furthermore, there are alternative, non-normative explanations for the linguistic data (as in recent work from Van Elswyk). Direct arguments would rely on normative judgments about the permissibility or impermissibility of assertions with and without the speaker knowing the content asserted. John Turri's experimental results purport to show that our norm of assertion is factive, and probably knowledge. But as a number of recent experimentalists (Kneer, Reuter and Broessel, Marsili and Wiegmann) have shown, Turri's results rely on the problematic use of 'should', as in 'Maria should assert that she owns a 1990 watch.' Correcting for this, these experimentalists provide strong evidence that our norm of assertion is not factive. The standard reply to evidence like this from the assertion of unlucky falsehoods is the excuse maneuver. But Marsili and Wiegmann, following Turri's protocol for testing for excuse validation, show that participants who judge that assertions of unlucky falsehoods are permissible are not guilty of excuse validation. Given our current understanding of the experimental evidence, knowledge is not our norm of assertion.
This is an opinionated guide to the literature on epistemic dilemmas. It discusses seven kinds of situations where epistemic dilemmas appear to arise; dilemmic, dilemmish, and non-dilemmic takes on them; and objections to dilemmic views along with dilemmists’ replies to them.
Epistemology is the study of knowledge. This entry covers epistemology in two parts: one historical, one contemporary. The former provides a brief theological history of epistemology. The latter outlines three categories of contemporary epistemology: traditional epistemology, social epistemology, and formal epistemology, along with corresponding theological questions that arise in each.
Descartes claimed that the Cogito is ‘so firm and sure that all the most extravagant suppositions of the sceptics were incapable of shaking it’. This paper aims to demonstrate that this claim is false by presenting a sceptical scenario for the Cogito. It is argued that the story ‘The Circular Ruins’ by J. L. Borges illustrates that one can doubt one’s own existence and that pace Descartes (and many others) the claim ‘I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind’ is false.
While we often assume that we can only know what is so, it's clear that we often speak as if we know things that aren't strictly speaking true. What should we make of this? Some would argue that we should take this talk as evidence that it's possible to know things that are strictly speaking false when, say, false representations are adequate for our purposes. I shall argue that it would be better on the whole to say (a) that knowledge ascriptions might be false but felicitous when the ascription relates a thinker to a falsehood than to say (b) that the ascriptions are strictly true. First, it seems that the arguments for thinking that we can know falsehoods overgeneralise (e.g., they don't just threaten the factivity of knowledge-ascriptions but also of claims about what's true or what's a fact). Thus, the motivation for abandoning the traditional view is problematic. Second, it seems that the alternative picture inherits the problems that contextualists face in understanding how knowledge might be normative for assertion and belief.
This essay examines a particular rhetorical strategy Nietzsche uses to supply prima facie epistemic justification: appeals to intuition. I first investigate what Nietzsche thinks intuitions are, given that he never uses the term ‘intuition’ as we do in contemporary philosophy. I then examine how Nietzsche can simultaneously endorse naturalism and intuitive appeals. I finish by looking at why and how Nietzsche uses appeals to intuition to further his philosophical agenda. Answering these questions should provide a deeper understanding of how Nietzsche does philosophy.
The investigation of epistemic virtues, such as curiosity, open-mindedness, intellectual courage and intellectual humility is a growing trend in epistemology. An underexplored question in this context is: what is the relationship between these virtues and other types of virtue, such as moral or prudential virtue? This paper argues that, although there is an intuitive sense in which virtues such as intellectual courage and open-mindedness have something to do with the epistemic domain, on closer inspection it is not clear to what extent they should be understood as genuine epistemic virtues. We draw a distinction between epistemic virtues and virtues with epistemic content and provide reason to believe that the aforementioned virtues are moral virtues with epistemic content rather than bona fide epistemic virtues. The upshot is that there are far fewer epistemic virtues out there than commonly assumed.
Evidentialism is the thesis, roughly, that one’s beliefs should fit one’s evidence. The enkratic principle is the thesis, roughly, that one’s beliefs should "line up" with one’s beliefs about which beliefs one ought to have. While both theses have seemed attractive to many, they jointly entail the controversial thesis that self-misleading evidence is impossible. That is to say, if evidentialism and the enkratic principle are both true, one’s evidence cannot support certain false beliefs about which beliefs one’s evidence supports. Recently, a number of epistemologists have challenged the thesis that self-misleading evidence is impossible on the grounds that misleading higher-order evidence does not have the kind of strong and systematic defeating force that would be needed to rule out the possibility of such self-misleading evidence. Here I respond to this challenge by proposing an account of higher-order defeat that does, indeed, render self-misleading evidence impossible. Central to the proposal is the idea that higher-order evidence acquires its normative force by influencing which conditional beliefs it is rational to have. What emerges, I argue, is an independently plausible view of higher-order evidence, which has the additional benefit of allowing us to reconcile evidentialism with the enkratic principle.
Many theories of rational belief give a special place to logic. They say that an ideally rational agent would never be uncertain about logical facts. In short: they say that ideal rationality requires "logical omniscience." Here I argue against the view that ideal rationality requires logical omniscience on the grounds that the requirement of logical omniscience can come into conflict with the requirement to proportion one’s beliefs to the evidence. I proceed in two steps. First, I rehearse an influential line of argument from the "higher-order evidence" debate, which purports to show that it would be dogmatic, even for a cognitively infallible agent, to refuse to revise her beliefs about logical matters in response to evidence indicating that those beliefs are irrational. Second, I defend this "anti-dogmatism" argument against two responses put forth by Declan Smithies and David Christensen. Against Smithies’ response, I argue that it leads to irrational self-ascriptions of epistemic luck, and that it obscures the distinction between propositional and doxastic justification. Against Christensen’s response, I argue that it clashes with one of two attractive deontic principles, and that it is extensionally inadequate. Taken together, these criticisms will suggest that the connection between logic and rationality cannot be what it is standardly taken to be—ideal rationality does not require logical omniscience.
G. E. R. Lloyd discerns two conflicting hypotheses concerning human cognition: cross-cultural universality and cultural relativity. The history of science is one discipline among many actively contributing to our understanding of human cognition at present. Not surprisingly, then, the dichotomy is also present in the history of science. In contrast to current approaches to the history of science, which highlight cultural relativity, genetic epistemology, conceived by Jean Piaget as a science of the acquisition of knowledge, emphasises cross-cultural universality. Using the multidimensionality of phenomena and different styles of inquiry, I will argue that there is no inherent contradiction between the different approaches to the history of science. However, the amicable co-existence of both approaches has been undermined by Peter Damerow, who criticised the applicability of genetic epistemology to the historical development of knowledge on historical and theoretical grounds. In this paper, I will review Damerow’s theoretical critique, and, in formulating a response, I will argue that genetic epistemology cannot be dismissed as one of many legitimate styles of inquiry into the history of science on the basis of this critique.
Even in areas of philosophy of science that don’t involve formal treatments of truth, one’s background view of truth still centrally shapes views on other issues. I offer an informal way to think about truth as trueing, like trueing a bicycle wheel. This holist approach to truth provides a way to discuss knowledge products like models in terms of how well-trued they are to their target. Trueing emphasizes: the process by which models are brought into true; how the idealizations in models are not false but rather like spokes in appropriate tension to achieve a better-trued fit to target; and that this process is not accomplished once and done forever, but instead requires upkeep and ongoing fine-tuning. I conclude by emphasizing the social importance of being a pragmatist about truth in order to accurately answer questions about science such as, “but do we really know that…”.
Hinge epistemology is a family of views that offers a novel approach to avoiding skeptical conclusions about the possibility of a posteriori justification of our empirical beliefs. They claim that at the basis of our empirical beliefs lie certain commitments whose rational status is not determined by our evidence. These are called hinge commitments. Prominent hinge epistemologists have claimed that hinge commitments are either rational or arational, yet not beliefs. I argue that such views are subject to decisive objections. I then offer what I consider to be the best version of hinge epistemology. On this view, hinge commitments are best understood as arational beliefs that contingently inform our worldview. I call this view the Arational Beliefs View.
Predictable polarization is everywhere: we can often predict how people’s opinions, including our own, will shift over time. Extant theories either neglect the fact that we can predict our own polarization, or explain it through irrational mechanisms. They needn’t. Empirical studies suggest that polarization is predictable when evidence is ambiguous, that is, when the rational response is not obvious. I show how Bayesians should model such ambiguity and then prove that—assuming rational updates are those which obey the value of evidence—ambiguity is necessary and sufficient for the rationality of predictable polarization. The main theoretical result is that there can be a series of such updates, each of which is individually expected to make you more accurate, but which together will predictably polarize you. Polarization results from asymmetric increases in accuracy. This mechanism is not only theoretically possible, but empirically plausible. I argue that cognitive search—searching a cognitively accessible space for a particular item—often yields asymmetrically ambiguous evidence, I present an experiment supporting its polarizing effects, and I use simulations to show how it can explain two of the core causes of polarization: confirmation bias and the group polarization effect.
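The asymmetry this abstract describes can be illustrated with a toy Bayesian update (a sketch with hypothetical numbers, not the paper's formal model): if searching for a confirming item usually succeeds when the hypothesis is true but often succeeds even when it is false, then the likely outcome of a search is a small rise in credence and the unlikely outcome is a large fall, even though the expected posterior equals the prior.

```python
# Toy sketch (hypothetical numbers, not the paper's model): one Bayesian
# update on the outcome of a cognitive search for a confirming item.
prior = 0.5             # initial credence in hypothesis H
p_find_if_true = 0.9    # assumed: search usually succeeds if H is true
p_find_if_false = 0.6   # assumed: search often succeeds even if H is false

# Total probability of finding the item, and posteriors by Bayes' rule.
p_find = prior * p_find_if_true + (1 - prior) * p_find_if_false   # 0.75
post_find = prior * p_find_if_true / p_find                       # 0.6
post_miss = prior * (1 - p_find_if_true) / (1 - p_find)           # 0.2

# Martingale property: on average the update changes nothing,
# yet the likely outcome (probability 0.75) is a modest rise,
# offset by an unlikely large fall: an asymmetric shift.
expected_post = p_find * post_find + (1 - p_find) * post_miss     # 0.5
```

Iterating such updates across agents searching for different items is one way to picture how individually reasonable updates might aggregate into group divergence.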
Enjoying great popularity in decision theory, epistemology, and philosophy of science, Bayesianism as understood here is fundamentally concerned with epistemically ideal rationality. It assumes a tight connection between evidential probability and ideally rational credence, and usually interprets evidential probability in terms of such credence. Timothy Williamson challenges Bayesianism by arguing that evidential probabilities cannot be adequately interpreted as the credences of an ideal agent. From this and his assumption that evidential probabilities cannot be interpreted as the actual credences of human agents either, he concludes that no interpretation of evidential probabilities in terms of credence is adequate. I argue to the contrary. My overarching aim is to show on behalf of Bayesians how one can still interpret evidential probabilities in terms of ideally rational credence and how one can maintain a tight connection between evidential probabilities and ideally rational credence even if the former cannot be interpreted in terms of the latter. By achieving this aim I illuminate the limits and prospects of Bayesianism.
To resolve the lottery paradox, the “no-justification account” proposes that one is not justified in believing that one's lottery ticket is a loser. The no-justification account commits to what I call “the Harman-style skepticism”. In reply, proponents of the no-justification account typically downplay the Harman-style skepticism. In this paper, I argue that the no-justification reply to the Harman-style skepticism is untenable. Moreover, I argue that the no-justification account is epistemically ad hoc. My arguments are based on a rather surprising finding that the no-justification account implies that people living in Taiwan typically suffer from the Harman-style skepticism.
Two principles in epistemology are apparent examples of the close connection between rationality and truth. First, adding a disjunct to what it is rational to believe yields a proposition that’s also rational to believe. Second, what’s likely if believed is rational to believe. While these principles are accepted by many, it turns out that they clash. In light of this clash, we must relinquish the second principle. Reflecting on its rationale, though, reveals that there are two distinct ways to understand the connection between rationality and truth. Rationality is fundamentally a guide to the belief-independent truth, rather than a guide to acquiring true beliefs. And this in turn has important implications for current discussions of permissivism, epistemic reasons, and epistemic consequentialism.
In this paper I present reasons for us to accept the hypothesis that suspended judgment has correctness conditions, just like beliefs do. Roughly put, the idea is that suspended judgment about p is correct when both p and ¬p might be true in view of certain facts that characterize the subject’s situation. The reasons to accept that hypothesis are broadly theoretical ones: it adds unifying power to our epistemological theories, it delivers good and conservative consequences, and it allows us to assess processes of reasoning involving attitudes of suspended judgment.
Eva Schmidt argues that facts about incoherent beliefs can be non-evidential epistemic reasons to suspend judgment. In this commentary, I argue that incoherence-based reasons to suspend are epistemically superfluous: if the subjects in Schmidt’s cases ought to suspend judgment, then they should do so merely on the basis of their evidential reasons. This suggests a more general strategy to reduce the apparent normativity of coherence to the normativity of evidence. I conclude with some remarks on the independent interest that reasons-first epistemology might have within an evidentialist framework.
In this article, I will consider whether, and in what way, doxastic states can harm. I’ll first consider whether, and in what way, a person’s doxastic state can harm her, before turning to the question of whether, and in what way, it can harm someone else.
This paper, published in 2022 in the Social Epistemology Review and Reply Collective (SERRC), offers a philosopher-psychologist’s explanation of our species’ deeply rooted resistance to self-knowledge. The article focuses on limitations that come about when people do not possess a group of cognitive and psychological skills and competencies which the author has called “epistemological intelligence.” The paper develops the idea of “one-way concepts,” concepts that can appropriately and informatively be applied to the human species, but which, due to human species pride and exemptionalism, people often refuse to apply. When people lack the necessary skills and competences, they become resistant to new ways of thinking that can help our species break free from self-limiting and often self-destructive patterns of thought and behavior.
Given the laws of our universe, the initial conditions and cosmological constants had to be "fine-tuned" to result in life. Is this evidence for design? We argue that we should be uncertain whether an ideal agent would take it to be so—but that given such uncertainty, we should react to fine-tuning by boosting our confidence in design. The degree to which we should do so depends on our credences in controversial metaphysical issues.
This article explores how citizen science promotes epistemic improvement both in scientific institutions and in society at large. To this end, it offers a characterization of citizen science and, on that basis, shows how the participation of non-specialists contributes to epistemic strengthening through plurality. It also examines how the inclusion of members of society in scientific research can promote the epistemic improvement of individuals through the formation of critical thinkers. The conclusions indicate that citizen science can optimize these two kinds of epistemic improvement through the design of projects that, alongside the pursuit of properly scientific goals, aim at securing these goods.
How is what we believe related to how we act? That depends on what we mean by ‘believe’. On the one hand, there is what we're sure of: what our names are, where we were born, whether we are sitting in front of a screen. Surety, in this sense, is not uncommon — it does not imply Cartesian absolute certainty, from which no possible course of experience could dislodge us. But there are many things that we think that we are not sure of. For example, you might think that it will rain sometime this month, but not be sure that it will. Both what we're sure of and what we think have important normative connections to action. But the connections are quite different. This paper explores these issues with respect to assertion, inquiry, and decision making. We conclude by arguing that there is no theoretically significant notion of ‘full belief’ intermediate in strength between thinking and being sure.
In this paper I’ll suggest that a certain challenge facing defeatist views about higher-order evidence cannot be met: motivating principles that recommend abandoning belief in cases of higher-order defeat, but do not recommend global scepticism. I’ll propose that, ultimately, the question of whether to abandon belief in response to the realization that our belief can’t be recovered from what I’ll call ‘a perspective of doubt’ can’t be answered through rational deliberation aimed at truth or accuracy.
Simple random sampling resolutions of the raven paradox relevantly diverge from scientific practice. We develop a stratified random sampling model, yielding a better fit and apparently rehabilitating simple random sampling as a legitimate idealization. However, neither accommodates a second concern, the objection from potential bias. We develop a third model that crucially invokes causal considerations, yielding a novel resolution that handles both concerns. This approach resembles Inference to the Best Explanation (IBE) and relates the generalization’s confirmation to confirmation of an associated law. We give it an objective Bayesian formalization and discuss the compatibility of Bayesianism and IBE.
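The way the sampling protocol fixes confirmation strength can be sketched in a toy Bayes-factor calculation, assuming a single alternative hypothesis on which only a fraction q of ravens are black; the function names and population sizes are hypothetical illustrations, not the paper's own model.

```python
# Hedged sketch: how the sampling protocol affects confirmation of
# H = 'all ravens are black'. Numbers are illustrative only.

def bf_sample_raven(q):
    """Bayes factor from sampling a raven and finding it black.

    Under H, P(black | raven) = 1; under the alternative it is q.
    """
    return 1.0 / q

def bf_sample_nonblack(n_ravens, n_nonblack_nonravens, q):
    """Bayes factor from sampling a non-black thing and finding it
    is not a raven.

    Under H, every non-black thing is a non-raven, so P = 1; under
    the alternative, the (1 - q) non-black ravens dilute that.
    """
    p_alt = n_nonblack_nonravens / (n_nonblack_nonravens + (1 - q) * n_ravens)
    return 1.0 / p_alt

# A black raven confirms substantially...
print(bf_sample_raven(q=0.5))                    # 2.0
# ...a non-black non-raven only negligibly:
print(bf_sample_nonblack(10_000, 10**9, q=0.5))  # ≈ 1.000005
```

On these toy assumptions, stratifying on ravenhood yields strong confirmation per observation, while simple random sampling of non-black things yields only the vanishingly small confirmation familiar from standard Bayesian treatments of the paradox.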
Methodological naturalists regard scientific method as the only effective way of acquiring knowledge. By contrast, traditional analytic philosophers reject employing scientific method in philosophy as illegitimate unless it is justified by the traditional methods. One of their attacks on methodological naturalism is the objection that it is either incoherent or viciously circular: any argument that may be offered for methodological naturalism either employs a priori methods or involves a vicious circle, since it employs the very method whose credentials the argument is meant to establish. The charge of circularity has also been brought against naturalistic arguments for specific scientific methods, such as the inductive argument for induction and the abductive argument for inference to the best explanation. In this paper, I respond to the charge of circularity using a meta-methodological rule that I call the ‘reflexivity requirement.’ Giving two examples of philosophical works, I illustrate how the requirement has already been considered necessary for self-referential theories. Finally, I put forward a meta-philosophical explanation of the naturalism-traditionalism debate over the legitimate method of philosophy.
This paper provides a critical overview of recent work on epistemic blame. The paper identifies key features of the concept of epistemic blame and discusses two ways of motivating the importance of this concept. Four different approaches to the nature of epistemic blame are examined. Central issues surrounding the ethics and value of epistemic blame are identified and briefly explored. In addition to providing an overview of the state of the art of this growing but controversial field, the paper highlights areas where future work is needed.
One challenge in developing an account of the nature of epistemic blame is to explain what differentiates epistemic blame from mere negative epistemic evaluation. The challenge is to explain the difference, without invoking practices or behaviors that seem out of place in the epistemic domain. In this paper, I examine whether the most sophisticated recent account of the nature of epistemic blame—due to Jessica Brown—is up for the challenge. I argue that the account ultimately falls short, but does so in an instructive way. Drawing on the lessons learned, I put forward an alternative approach to the nature of epistemic blame. My account understands epistemic blame in terms of modifications to the intentions and expectations that comprise our “epistemic relationships” with one another. This approach has a number of attractions shared by Brown’s account, but it can also explain the significance of epistemic blame.
The essential role of language in rational cognition is analysed. The approach is functional: only the results of the connection between language, reality, and thinking are considered. Scientific language is analysed as an extension and improvement of everyday language. The analysis gives a uniform view of language and rational cognition. The consequences for the nature of ontology, truth, logic, thinking, scientific theories, and mathematics are derived.
Humanity faces two fundamental problems of learning: learning about the universe, and learning to become civilized. We have solved the first problem, but not the second, and that puts us in a situation of great danger. Almost all of our global problems have arisen as a result. It has become a matter of extreme urgency to solve the second problem. The key is to learn from our solution to the first problem how to solve the second one. This was the basic idea of the 18th-century Enlightenment, but in implementing it, the Enlightenment blundered. Those mistakes are still built into academia today. In order to learn how to create a civilized, enlightened world, the key thing we need to do is to cure academia of the structural blunders we have inherited from the Enlightenment. We need to bring about a revolution in science, and in academia more broadly, so that the basic aim becomes wisdom, and not just knowledge.