To determine whether fish welfare matters morally, we need to know what characteristics or capacities beings need to have in order to be morally considerable, and whether fish have such characteristics. In this paper I discuss a group of theories, Kantian practical reasoning theories, in which agency (or practical rationality) is traditionally thought to be a necessary condition for moral considerability. An individual must have quite sophisticated capacities to be a (moral) agent in such theories: she must be able to act on rational principles. It seems unlikely that nonhuman animals such as fish have such capacities. I argue, however, that on the basis of certain Kantian arguments, moral agents have reason to accept duties to nonrational animals if they are agents in a much less demanding sense: if they are motivated to pursue the objects of their desires. If fish have this capacity, their welfare matters morally.
This article sets out to research and resolve the conceptual lag between the family as defined and recognised in law and the multiplicity of queer constellations of ‘intimate citizenship’ in which families are actually done. The focus is on adult unions outside of conjugal coupledom. The family law practices, and awareness and expectations of adults in such unions were analysed through 21 interviews and the content analysis of 40 documents and were projected against the applicable legal mould. The article then proposes to resolve the existing conceptual lag by advancing a conception of the family as a malleable, open-ended assemblage, in lieu of the current rigid status approach in family law, which is both under- and overinclusive. The proposed conception does justice to the increasing fluidity of family formations, which are not always already domestic, dyadic and sexual.
We characterise the intermediate logics which admit a cut-free hypersequent calculus of the form \(\mathbf{HLJ} + \mathcal{R}\), where \(\mathbf{HLJ}\) is the hypersequent counterpart of the sequent calculus \(\mathbf{LJ}\) for propositional intuitionistic logic, and \(\mathcal{R}\) is a set of so-called structural hypersequent rules, i.e., rules not involving any logical connectives. The characterisation of this class of intermediate logics is presented both in terms of the algebraic and the relational semantics for intermediate logics. We discuss various consequences, positive as well as negative, of this characterisation.
We propose simple nonlinear mathematical models for the legal concept of balancing of interests. Our aim is to provide an abstract formalisation of a balancing decision while assuring consistency, and ultimately legal certainty, across cases. We focus on the conflict between the rights to privacy and to the protection of personal data in Art. 7 and Art. 8 of the EU Charter of Fundamental Rights (EUCh) against the right of access to information derived from Art. 11 EUCh. These competing rights are denoted by (\(i_1\)) _right to privacy_ and (\(i_2\)) _access to information_; mathematically, their indices are given by \(u_1\in [0,1]\) and \(u_2\in [0,1]\) respectively, subject to the constraint \(u_1+u_2=1\). This constraint allows us to use a single index _u_ to resolve the conflict through balancing. The outcome is determined by comparing the index _u_ with a previously given threshold \(u_0\). For simplicity, we assume that the balancing depends only on selected legal criteria, such as the social status of the affected person and the sphere from which the information originated, which are represented as inputs of the models, called legal parameters. Additionally, we take “time” into consideration as a legal criterion, building on the European Court of Justice’s ruling on the right to be forgotten: by considering time as a legal parameter, we model how the outcome of the balancing changes with the passage of time. To capture the dependence of the outcome _u_ on these criteria as legal parameters, data were created by a fully-qualified lawyer. Compared to other approaches based on machine learning, especially neural networks, this approach requires significantly less data. This might come at the price of higher abstraction and simplification, but it also provides for higher transparency and explainability. Two mathematical models for _u_, a time-independent model and a time-dependent model, are proposed and fitted using the data.
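The threshold-based balancing scheme described above can be sketched in a few lines. Everything here is an illustrative assumption: the weight values, the clamped-linear form of the index, the exponential time decay, and all function and parameter names are hypothetical, not the paper's fitted models.

```python
# Hypothetical sketch of a threshold-based balancing model. The weights,
# the linear form, and the exponential time factor are illustrative
# assumptions, not the paper's fitted models.
import math

def balance_index(social_status: float, sphere: float) -> float:
    """Time-independent index u in [0, 1] for the privacy side (u1);
    the access-to-information side is then u2 = 1 - u."""
    raw = 0.5 + 0.3 * social_status + 0.2 * sphere  # illustrative weights
    return min(1.0, max(0.0, raw))

def balance_index_t(social_status: float, sphere: float,
                    years: float, tau: float = 5.0) -> float:
    """Time-dependent variant: as time passes, the weight of access to
    information decays, so the privacy index grows toward 1 (echoing the
    right-to-be-forgotten intuition)."""
    u_now = balance_index(social_status, sphere)
    return u_now + (1.0 - u_now) * (1.0 - math.exp(-years / tau))

def outcome(u: float, threshold: float = 0.5) -> str:
    """Resolve the conflict by comparing the index u with a threshold u0."""
    return "privacy prevails" if u > threshold else "access prevails"
```

Because the two indices are constrained by \(u_1+u_2=1\), a single index suffices: raising the privacy index automatically lowers the access index, so one comparison with the threshold settles the balancing.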
We study a logic for deontic necessity and sufficiency, as originally proposed by van Benthem (1979, pp. 36–41). Building on earlier work in modal logic, we provide a sound and complete axiomatization for it, consider some standard extensions, and study other important properties. After that, we compare this logic to the logic of “obligation as weakest permission” of Anglberger et al. (2015, pp. 807–827).
This article is an investigation of parallel themes in Heinrich Hertz's philosophy of science and Kant's theory of schemata, symbols and regulative ideas. It is argued that Hertz's "pictures" bear close similarities to Kantian "schemata": that is, they are rules linking concepts to intuitions and provide them with their meaning. Kant's distinction between symbols and schemata is discussed and related to Hertz's three pictures of mechanics. It is argued that Hertz considered his own picture of mechanics (the "hidden mass" picture) as symbolic in a different way than the force and energy pictures. The final part of the article describes how Harald Høffding, soon after the publication of Hertz's Principles of Mechanics, developed a general theory of analogical reasoning, relying on the ideas of Hertz and Kant.
SUMMARY: The history of the law of nations is generally seen as a synonym for the history of the laws of war. Yet a strictly bilateral perspective can distort our interpretation of early modern diplomacy. The Peace of Utrecht inaugurated an era of relative stability in the European state system, based on balance-of-power politics and anti-hegemonic legal argumentation. Incidental conflicts ought to be interpreted against this background. Declarations of war issued in 1718, 1719 and 1733 during the Wars of the Quadruple Alliance and the Polish Succession should not be read as doctrinal surrogates for trials between two parties, but as manifestos in a European arena.
Recent data have provided evidence for an unrecognised ancient lineage of green plants that persists in marine deep-water environments. The green plants are a major group of photosynthetic eukaryotes that have played a prominent role in the global ecosystem for millions of years. A schism early in their evolution gave rise to two major lineages, one of which diversified in the world's oceans and gave rise to a large diversity of marine and freshwater green algae (Chlorophyta) while the other gave rise to a diverse array of freshwater green algae and the land plants (Streptophyta). It is generally believed that the earliest-diverging Chlorophyta were motile planktonic unicellular organisms, but the discovery of an ancient group of deep-water seaweeds has challenged our understanding of the basal branches of the green plant phylogeny. In this review, we discuss current insights into the origin and diversification of the green plant lineage.
Artifacts are probably our most obvious everyday encounter with technology. Therefore, a good understanding of the nature of technical artifacts is a relevant part of technological literacy. In this article we draw from the philosophy of technology to develop a conceptualization of technical artifacts that can be used for educational purposes. Furthermore, we report a small exploratory empirical study to see to what extent teachers' intuitive ideas about artifacts match the way philosophers write about the nature of artifacts. Finally, we suggest a teaching and learning strategy for improving teachers' concepts of technical artifacts through practical activities.
This volume is the first collection of articles dedicated to Wittgenstein's thoughts on colour, focusing in particular on his so-called Remarks on Colour, a piece of writing that has received comparatively little attention from Wittgenstein scholars. The articles discuss why Wittgenstein wrote so intensively about colour during the last years of his life and what significance these remarks have for understanding his philosophical work in general.
On the basis of the Suppes–Sneed structural view of scientific theories, we take a fresh look at the concept of refutability, which was famously proposed by K.R. Popper in 1934 as a criterion for the demarcation of scientific theories from non-scientific ones, e.g., pseudo-scientific and metaphysical theories. By way of an introduction we argue that a clash between Popper and his critics on whether scientific theories are, in fact, refutable can be partly explained by the fact that Popper and his critics ascribed different meanings to the term 'theory'. Then we narrow our attention to one particular theory, namely quantum mechanics, in order to elucidate the general matters discussed. We prove that quantum mechanics is irrefutable in a rather straightforward sense, but argue that it is refutable in a more sophisticated sense, which incorporates some observations obtained by looking closely at the practice of physics. We locate exactly where non-rigorous elements enter the evaluation of a scientific theory; this makes us see clearly how fruitful mathematics is for the philosophy of science.
Current European innovation and security policies are increasingly channeled into efforts to address the assumed challenges that threaten European societies. A field in which this has become particularly salient is digitized EU border management. Here, the framework of responsible research and innovation (RRI) has recently been used to point to the alleged sensitivity of political actors towards the contingent dimensions of emerging security technologies. RRI, in general, is concerned with societal needs and with the engagement and inclusion of various stakeholder groups in research and innovation processes, aiming to anticipate undesired consequences of, and to identify socially acceptable alternatives for, emerging technologies. However, RRI has also been criticized as an industry-driven attempt to gain societal legitimacy for new technologies. In this article, we argue that while RRI evokes a space where different actors enter co-creative dialogues, it lays bare the specific challenges of governing security innovation in socially responsible ways. Empirically, we draw on the case study of BODEGA, the first EU-funded research project to apply the RRI framework to the field of border security. We show how stakeholders involved in the project represent their work in relation to RRI and the resulting benefits and challenges they face. The paper argues that applying the framework to the field of security lays bare its limitations: RRI itself embodies a political agenda, conceals alternative experiences of those upon whom security is enacted, and its key propositions of openness and transparency are hardly met in practice due to confidentiality agreements. Our hope is to contribute to work on RRI and emerging debates about how the concept can be contextualized for the field of security, a field that, perhaps more than any other, needs to consider the ethical dimension of its activities.
In a recent paper, Jeanne Peijnenburg and David Atkinson [Studia Logica 89:333–341] have challenged the foundationalist rejection of infinitism by giving an example of an infinite, yet explicitly solvable regress of probabilistic justification. So far, however, there has been no criterion for the consistency of infinite probabilistic regresses, and in particular, foundationalists might still question the consistency of the solvable regress proposed by Peijnenburg and Atkinson.
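A regress of the kind at issue can be made concrete with a small numerical sketch. Assume (purely for illustration; these constants and function names are not from the paper) that each proposition q_n is supported by q_{n+1} via fixed conditional probabilities alpha = P(q_n | q_{n+1}) and beta = P(q_n | not-q_{n+1}). The law of total probability then gives the recursion p_n = beta + (alpha - beta) * p_{n+1}, which converges to a fixed point whatever value seeds the chain.

```python
# Illustrative sketch of an explicitly solvable infinite probabilistic
# regress: each q_n is supported by q_{n+1} via uniform conditional
# probabilities. The constants alpha and beta are assumptions chosen
# for illustration, not values from the paper.

def regress_probability(alpha: float, beta: float, depth: int,
                        seed: float = 0.5) -> float:
    """Probability of q_0 obtained by unfolding the regress to the given
    depth via p_n = beta + (alpha - beta) * p_{n+1}."""
    p = seed  # arbitrary start; its influence shrinks by |alpha-beta| per step
    for _ in range(depth):
        p = beta + (alpha - beta) * p
    return p

def regress_limit(alpha: float, beta: float) -> float:
    """Closed-form fixed point p = beta / (1 - alpha + beta),
    valid whenever |alpha - beta| < 1."""
    return beta / (1 - alpha + beta)
```

For instance, with alpha = 0.9 and beta = 0.05 the limit is 1/3, and the unfolded regress converges to it regardless of the seed value, which is what makes the regress "explicitly solvable" despite being infinite.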
This paper discusses the uniqueness thesis, a core thesis in the epistemology of disagreement. After presenting uniqueness and clarifying relevant terms, a novel counterexample to the thesis will be introduced. This counterexample involves logical disagreement. Several objections to the counterexample are then considered, and it is argued that the best responses to the counterexample all undermine the initial motivation for uniqueness.
This paper aims to show that Selim Berker’s widely discussed prime number case is merely an instance of the well-known generality problem for process reliabilism and thus arguably not as interesting a case as one might have thought. Initially, Berker’s case is introduced and interpreted. Then the most recent response to the case from the literature is presented. Eventually, it is argued that Berker’s case is nothing but a straightforward consequence of the generality problem, i.e., the problematic aspect of the case for process reliabilism (if any) is already captured by the generality problem.
Background: The preferable position of deep brain stimulation (DBS) electrodes is proposed to be in the dorsolateral subthalamic nucleus (STN) to improve general motor performance. The optimal DBS electrode localization for post-operative improvement of balance and gait is unknown. Methods: In this single-center retrospective analysis, 66 Parkinson’s disease patients were assessed pre- and post-operatively using the MDS-UPDRS, the freezing of gait (FoG) score, Giladi’s gait and falls questionnaire and the Berg balance scale. The clinical outcome was related to the DBS electrode coordinates in the x, y, z plane as revealed by image-based reconstruction. Binomial generalized linear mixed models with the fixed-effect variables electrode asymmetry, parkinsonian subtype, medication, age class and clinical DBS-induced changes were analyzed. Results: STN-DBS improved all motor, balance and FoG scores in the MED OFF condition; results in the MED ON condition were heterogeneous. The reconstructed DBS electrode coordinates affected the responsiveness of axial symptoms. FoG and balance responders showed slightly more medially located STN electrode coordinates and less medio-lateral asymmetry of the reconstructed electrode coordinates across hemispheres compared to non-responders. Conclusion: The reconstructed DBS electrode coordinates, particularly electrode asymmetry on the medio-lateral axis, affected the post-operative responsiveness of balance and FoG symptoms in PD patients.
We present a logic, \(\mathbf{ELI^r}\), for the discovery of deterministic causal regularities starting from empirical data. Our approach is inspired by Mackie’s theory of causes as INUS-conditions, and implements a more recent adjustment to Mackie’s theory according to which the left-hand side of causal regularities is required to be a minimal disjunction of minimal conjunctions. To derive such regularities from a given set of data, we make use of the adaptive logics framework. Our knowledge of deterministic causal regularities is, as Mackie noted, most often gappy or elliptical. The adaptive logics framework is well-suited to explicate both the internal and the external dynamics of the discovery of such gappy regularities. After presenting \(\mathbf{ELI^r}\), we first discuss these forms of dynamics in more detail. Next, we consider some criticisms of the INUS-account and show how our approach avoids them, and we compare \(\mathbf{ELI^r}\) with the CNA algorithm recently proposed by Michael Baumgartner.
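The target of such a discovery procedure, minimal conjunctions of factors that are sufficient for an effect in the data, can be illustrated by a brute-force search. This sketch shows only the kind of regularity the logic is meant to derive; it is not the adaptive-logic proof theory itself, and the function names and data encoding are assumptions made for illustration.

```python
# Brute-force search for minimal sufficient conjunctions of literals in
# a data set, illustrating the INUS-style regularities discussed above.
# This is NOT the adaptive logic ELI^r or the CNA algorithm; names and
# data encoding are illustrative assumptions.
from itertools import combinations, product

def minimal_sufficient_conjunctions(rows, factors, effect):
    """Return the minimal conjunctions of literals (factor, value) that
    are non-vacuously sufficient for the effect in the given rows."""
    def satisfies(row, conj):
        return all(row[f] == v for f, v in conj)

    found = []
    # Check candidate conjunctions in order of increasing size, so any
    # previously recorded conjunction is at most as large.
    for k in range(1, len(factors) + 1):
        for fs in combinations(factors, k):
            for vals in product([True, False], repeat=k):
                conj = tuple(zip(fs, vals))
                holders = [r for r in rows if satisfies(r, conj)]
                # Sufficient: whenever the conjunction holds, so does the effect.
                if holders and all(r[effect] for r in holders):
                    # Minimal: no recorded proper sub-conjunction.
                    if not any(set(c) < set(conj) for c in found):
                        found.append(conj)
    return found
```

For the textbook effect E = (A and B) or C, the search recovers exactly the disjunction of minimal conjunctions {A, B} and {C}, i.e. the minimal-disjunction-of-minimal-conjunctions form required of the left-hand side of a causal regularity.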
The paper gives a detailed reconstruction and discussion of Peirce’s doctrine of propositions, so-called Dicisigns, developed in the years around 1900. The special features that set it apart from the logical mainstream are highlighted: the functional definition, dependent neither upon conscious stances nor upon human language; the semiotic characterization extending propositions and quasi-propositions to cover prelinguistic and prehuman occurrences of signs; and the relations of Dicisigns to the conceptions of facts, of diagrammatical reasoning, of icons and indices, of meanings, of objects, and of syntax in Peirce’s logic-as-semiotics.
Following Lauwers and Van Liedekerke (1995), this paper explores in a model-theoretic framework the relation between Arrovian aggregation rules and ultraproducts, in order to investigate a source of impossibility results for the case of an infinite number of individuals and an aggregation rule based on a free ultrafilter of decisive coalitions.
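The connection between aggregation rules and ultrafilters of decisive coalitions admits a compact sketch. Under the standard construction (illustrative code; the function names and the finite example are assumptions, not the paper's model-theoretic apparatus), society prefers x to y exactly when the coalition of individuals preferring x to y is decisive, i.e. belongs to the ultrafilter; for a finite electorate every ultrafilter is principal, which is why the rule collapses into a dictatorship, mirroring Arrow's impossibility.

```python
# Sketch of ultrafilter-based preference aggregation. For a finite
# electorate every ultrafilter is principal, so the rule reduces to a
# dictatorship; names and the example are illustrative assumptions.
from itertools import combinations

def social_prefers(x, y, profile, ultrafilter):
    """x is socially preferred to y iff the coalition of individuals
    preferring x to y is decisive (a member of the ultrafilter)."""
    coalition = frozenset(i for i, pref in enumerate(profile) if pref(x, y))
    return coalition in ultrafilter

def principal_ultrafilter(dictator, n):
    """The principal ultrafilter on {0, ..., n-1} generated by one
    individual: all coalitions containing that individual."""
    individuals = range(n)
    return {frozenset(s) for k in range(n + 1)
            for s in combinations(individuals, k) if dictator in s}
```

With three individuals and the principal ultrafilter generated by individual 1, the social preference coincides with individual 1's preference whatever the others think; the impossibility results for infinite electorates arise precisely because free (non-principal) ultrafilters exist there.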
This note employs the recently established consistency theorem for infinite regresses of probabilistic justification (Herzberg in Stud Log 94(3):331–345, 2010) to address some of the better-known objections to epistemological infinitism. In addition, another proof for that consistency theorem is given; the new derivation no longer employs nonstandard analysis, but utilises the Daniell–Kolmogorov theorem.
The rejection of an infinitesimal solution to the zero-fit problem by A. Elga (2004) does not seem to appreciate the opportunities provided by the use of internal finitely-additive probability measures. Indeed, internal laws of probability can be used to find a satisfactory infinitesimal answer to many zero-fit problems: not only to the one suggested by Elga, but also to Markov chain (that is, discrete and memory-less) models of reality. Moreover, the generalization of likelihoods that Elga has in mind is not as hopeless as it appears to be in his article. In fact, for many practically important examples, the use of likelihoods can succeed in circumventing the zero-fit problem.
The problem of how to rationally aggregate probability measures occurs in particular when a group of agents, each holding probabilistic beliefs, needs to rationalise a collective decision on the basis of a single ‘aggregate belief system’, and when an individual whose belief system is compatible with several probability measures wishes to evaluate her options on the basis of a single aggregate prior via classical expected utility theory. We investigate this problem by first recalling some negative results from preference and judgment aggregation theory which show that the aggregate of several probability measures should not be conceived as the probability measure induced by the aggregate of the corresponding expected utility preferences. We then describe how McConway’s (1981, pp. 410–414) theory of probabilistic opinion pooling can be generalised to cover the aggregation of infinite profiles of finitely additive probability measures; we prove the existence of aggregation functionals satisfying responsiveness axioms à la McConway plus additional desiderata even for infinite electorates. On the basis of the theory of propositional-attitude aggregation, we argue that this is the most natural aggregation theory for probability measures. Our aggregation functionals for the case of infinite electorates are neither oligarchic nor integral-based and satisfy a weak anonymity condition. The delicate set-theoretic status of integral-based aggregation functionals for infinite electorates is discussed.
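For the finite case that McConway's theory characterizes, opinion pooling can be sketched as a weighted average of probability assignments (linear pooling). The encoding of belief states as dicts over a finite set of worlds and the particular weights are illustrative assumptions, not the paper's generalisation to infinite profiles of finitely additive measures.

```python
# Sketch of linear opinion pooling over a finite set of worlds: the
# aggregate probability assignment is the pointwise weighted average of
# the individual assignments. Encoding and weights are illustrative.

def linear_pool(assignments, weights):
    """Pool probability assignments (dicts world -> probability) using
    weights that sum to 1; returns the aggregate assignment."""
    worlds = assignments[0].keys()
    return {w: sum(wt * p[w] for wt, p in zip(weights, assignments))
            for w in worlds}
```

The pooled assignment is again a probability measure, and the pooled probability of any event is the same weighted average of the individual probabilities of that event, which is the eventwise responsiveness behaviour the axioms à la McConway are meant to capture.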