Host-microbiome interactions (HMIs) are critical for the modulation of biological processes and are associated with several diseases, and extensive HMI studies have generated large amounts of data. We propose that the logical representation of the knowledge derived from these data and the standardized representation of experimental variables and processes can foster integration of data and reproducibility of experiments and thereby further HMI knowledge discovery. A community-based Ontology of Host-Microbiome Interactions (OHMI) was developed following the OBO Foundry principles. OHMI leverages established ontologies to create logically structured representations of microbiomes, microbial taxonomy, host species, host anatomical entities, and HMIs under different conditions and associated study protocols and types of data analysis and experimental results.
The paradox of pain refers to the idea that the folk concept of pain is paradoxical, treating pains as simultaneously mental states and bodily states (e.g. Hill 2005, 2017; Borg et al. 2020). By taking a close look at our pain terms, this paper argues that there is no paradox of pain. The air of paradox dissolves once we recognise that pain terms are polysemous and that there are two separate but related concepts of pain rather than one.
There is a long-standing disagreement in the philosophy of probability and Bayesian decision theory about whether an agent can hold a meaningful credence about an upcoming action, while she deliberates about what to do. Can she believe that it is, say, 70% probable that she will do A, while she chooses whether to do A? No, say some philosophers, for Deliberation Crowds Out Prediction (DCOP), but others disagree. In this paper, we propose a valid core for DCOP, and identify terminological causes for some of the apparent disputes.
Since the early nineteenth century a membrane or wall has been central to the cell’s identity as the elementary unit of life. Yet the literally and metaphorically marginal status of the cell membrane made it the site of clashes over the definition of life and the proper way to study it. In this article I show how the modern cell membrane was conceived of by analogy to the first “artificial cell,” invented in 1864 by the chemist Moritz Traube (1826–1894), and reimagined by the plant physiologist Wilhelm Pfeffer (1845–1920) as a precision osmometer. Pfeffer’s artificial cell osmometer became the conceptual and empirical basis for the law of dilute solutions in physical chemistry, but his use of an artificial analogue to theorize the existence of the plasma membrane as distinct from the cell wall prompted debate over whether biology ought to be more closely unified with the physical sciences, or whether it must remain independent as the science of life. By examining how the histories of plant physiology and physical chemistry intertwined through the artificial cell, I argue that modern biology relocated vitality from protoplasmic living matter to nonliving chemical substances—or, in broader cultural terms, that the disenchantment of life was accompanied by the (re)enchantment of ordinary matter.
In this paper we explore the relationship between norms of belief revision that may be adopted by members of a community and the resulting dynamic properties of the distribution of beliefs across that community. We show that at a qualitative level many aspects of social belief change can be obtained from a very simple model, which we call ‘threshold influence’. In particular, we focus on the question of what makes the beliefs of a community stable under various dynamical situations. We also consider refinements and alternatives to the ‘threshold’ model, the most significant of which is to consider changes to plausibility judgements rather than mere beliefs. We show first that some such change is mandated by difficult problems with belief-based dynamics related to the need to decide on an order in which different beliefs are considered. Secondly, we show that the resulting plausibility-based account results in a deterministic dynamical system that is non-deterministic at the level of beliefs.
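A dynamic of the ‘threshold influence’ kind can be sketched in a few lines. The update rule below (an agent adopts a belief once the share of its neighbours holding it reaches the agent's threshold) and all parameter names are our own illustrative assumptions, not the paper's exact model:

```python
# Minimal sketch of a threshold-influence belief dynamic (illustrative only).
# Each agent adopts a belief once the fraction of its neighbours holding it
# reaches that agent's threshold; we iterate synchronously to a fixed point.

def threshold_update(beliefs, neighbours, thresholds):
    """One synchronous update step; returns the new belief assignment."""
    new = {}
    for agent, believes in beliefs.items():
        ns = neighbours[agent]
        share = sum(beliefs[n] for n in ns) / len(ns) if ns else 0.0
        # Beliefs are retained once adopted (a monotone dynamic).
        new[agent] = believes or share >= thresholds[agent]
    return new

def run_to_fixpoint(beliefs, neighbours, thresholds, max_steps=100):
    """Iterate until the community's belief distribution is stable."""
    for _ in range(max_steps):
        new = threshold_update(beliefs, neighbours, thresholds)
        if new == beliefs:
            return new
        beliefs = new
    return beliefs

# A 4-agent line network: a single initial believer can cascade or stall
# depending on the thresholds. Here thresholds of 0.5 allow a full cascade.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
beliefs = {0: True, 1: False, 2: False, 3: False}
thresholds = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
stable = run_to_fixpoint(beliefs, neighbours, thresholds)
```

Raising any one threshold above the neighbour share it would receive stops the cascade at that agent, which is the kind of stability question the abstract raises.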
_An Introduction to Chinese Philosophy_ unlocks the mystery of ancient Chinese philosophy and unravels the complexity of Chinese Buddhism by placing them in the contemporary context of discourse. Elucidates the central issues and debates in Chinese philosophy, its different schools of thought, and its major philosophers. Covers eight major philosophers in the ancient period, among them Confucius, Laozi, and Zhuangzi. Illuminates the links between different schools of philosophy. Opens the door to further study of the relationship between Chinese and Western philosophy.
The surface grammar of reports such as ‘I have a pain in my leg’ suggests that pains are objects which are spatially located in parts of the body. We show that the parallel construction is not available in Mandarin. Further, four philosophically important grammatical features of such reports cannot be reproduced. This suggests that arguments and puzzles surrounding such reports may be tracking artefacts of English, rather than philosophically significant features of the world.
The aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, an impetus undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, from a theoretical perspective irresponsibility is inherent in the nature of robotics technologies, and this threatens to thwart the endeavour. This is because the concept of responsibility, despite being treated as such, is not monolithic: rather, this seemingly unified concept consists of converging and confluent concepts that shape the idea of what we colloquially call responsibility. Depending on which particular concept of responsibility is foregrounded, robotics will be simultaneously responsible and irresponsible: an observation that cuts against the grain of the drive towards responsible robotics. This problem is further compounded by responsible design and development as contrasted to responsible use. A further difficulty in defining the concept of responsibility in robotics is that human responsibility is the main frame of reference. Robotic systems are increasingly expected to achieve human-level performance, including the capacities associated with responsibility and other criteria which are necessary to act responsibly. This subsists within a larger phenomenon where the difference between humans and non-humans, be it animals or artificial systems, appears to be increasingly blurred, thereby disrupting orthodox understandings of responsibility. This paper seeks to supplement the responsible robotics impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence technologies.
The relationship between the responsibilities of the agent and the rights of the patient suggests that a rights regime is the other side of the responsibility coin. The major distinction of this approach is that it inverts the power relationship: while human agents are perceived to control robotic patients, the prospect of this relationship becoming reversed is emerging. As robotic technologies become ever more sophisticated, and even genuinely complex, asserting human rights directly against robotic harms becomes increasingly important. Such an approach includes developing human rights that not only ‘protect’ humans but also ‘strengthen’ people against the challenges introduced by robotics and AI [this distinction parallels Berlin’s negative and positive concepts of liberty], by emphasising the social and reflective character of the notion of humanness as well as the difference between the human and the nonhuman. This allows the human frame of reference to be constitutive of, rather than only subject to, robotic and AI technologies, so that it is human characteristics, and not those of the technology, that shape the human rights framework in the first place.
(Recipient of the 2020 Everett Mendelsohn Prize.) This article revisits the development of the protoplasm concept as it originally arose from critiques of the cell theory, and examines how the term “protoplasm” transformed from a botanical term of art in the 1840s to the so-called “living substance” and “the physical basis of life” two decades later. I show that there were two major shifts in biological materialism that needed to occur before protoplasm theory could be elevated to equal status with cell theory in the nineteenth century. First, I argue that biologists had to accept that life could inhere in matter alone, regardless of form. Second, I argue that in the 1840s, ideas of what formless, biological matter was capable of changed dramatically: going from a “coagulation paradigm” that had existed since Theophrastus, to a more robust conception of matter that was itself capable of movement and self-maintenance. In addition to revisiting Schleiden and Schwann’s original writings on cell theory, this article looks especially closely at Hugo von Mohl’s definition of the protoplasm concept in 1846, and how it differed from his primordial utricle theory of cell structure two years earlier. This article draws on Lakoff and Johnson’s theory of “ontological metaphors” to show that the cell, primordial utricle, and protoplasm can be understood as material container, object, and substance, and that these overlapping distinctions help explain the chaotic and confusing early history of cell theory.
Given the prospect of autonomous vehicles causing both novel and known forms of damage, harm and injury, the issue of responsibility has been a recurring theme in the debate concerning them. Yet, the discussion of responsibility has obscured the finer details both between the underlying concepts of responsibility, and their application to the interaction between human beings and artificial decision-making entities. By developing meaningful distinctions and examining their ramifications, this article contributes to this debate by refining the underlying concepts that together inform the idea of responsibility. Two different approaches are offered to the question of responsibility and autonomous vehicles: targeting and risk distribution. The article then introduces a thought experiment which situates autonomous vehicles within the context of crash optimisation impulses and coordinated or networked decision-making. It argues that guiding ethical frameworks overlook compound or aggregated effects which may arise, and which can lead to subtle forms of structural discrimination. Insofar as such effects remain unrecognised by the legal systems relied upon to remedy them, the potential for societal inequalities is increased and entrenched, and situations of injustice and impunity may be unwittingly maintained. This second set of concerns may represent a hitherto overlooked type of responsibility gap arising from inadequate accountability processes capable of challenging systemic risk displacement.
Mandarin focus particles systematically have heterogeneous uses. By examining details of two focus particles jiu ‘only’ and dou ‘even’, this paper explores the hypothesis that varieties of alternatives give rise to systematic ‘ambiguities’. Specifically, by positing sum-based alternative sets and atom-based ones, it maintains an unambiguous semantics of jiu as only-weak and dou as even, while deriving their variability through interaction with alternatives. Independently motivated analyses of distributive/collective readings and contrastive topics, combined with varieties of alternatives, deliver the full range of facts concerning jiu and dou. Theoretically, the paper illustrates an integration of Link and Landman’s theory of pluralities into Rooth’s alternative semantics.
In this paper, a criticism of the traditional theories of approximation and idealization is given as a summary of previous works. After identifying the real purpose and measure of idealization in the practice of science, it is argued that the best way to characterize idealization is not to formulate a logical model – something analogous to Hempel's D-N model for explanation – but to study its different guises in the praxis of science. A case study is then made in thermostatistical physics. After a brief sketch of the theories for phase transitions and critical phenomena, I examine the various idealizations that go into the making of models at three different levels. The intended result is to induce a deeper appreciation of the complexity and fruitfulness of idealization in the praxis of model-building, not to give an abstract theory of it.
This paper offers a fine-grained analysis of different versions of the well-known sure-thing principle. We show that Savage's formal formulation of the principle, i.e., his second postulate (P2), is strictly stronger than what was originally intended.
Slurs are derogatory words and they are used to derogate certain groups. Theories of slurs must explain why they are derogatory words, as well as other features like independence and descriptive ineffability. This paper proposes an illocutionary force indicator theory of slurs: they are derogatory terms because their use is to perform the illocutionary act of derogation, which is a declarative illocutionary act to enforce norms against the target. For instance, calling a Chinese person “chink” is an act of derogation to enforce racist norms that license exclusion of the Chinese, deny their rights to dignity, etc. The contribution of this paper is twofold. First, it offers a more comprehensive explanation of the features of slurs than earlier speech act approaches. Second, it provides a theory that is immune to the problems faced by existing theories, such as wrong predictions of truth-conditions, explaining unacceptability to non-bigots, and explaining slurs against the dominant groups.
Statements not only update our current knowledge, but also have other dynamic effects. In particular, suggestions or commands ‘upgrade’ our preferences by changing the current order among worlds. We present a complete logic of knowledge update plus preference upgrade that works with dynamic-epistemic-style reduction axioms. This system can model changing obligations, conflicting commands, or ‘regret’. We then show how to derive reduction axioms from arbitrary definable relation changes. This style of analysis also has a product update version with preferences between actions, as well as worlds. Some illustrations are presented involving defaults and obligations. We conclude that our dynamic framework is viable, while admitting a further extension to more numerical ‘utility update’.
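One standard way such a preference upgrade is modelled is the ‘lexicographic’ reordering of worlds, which the following sketch illustrates; the function and variable names are ours, not the paper's notation, and the four-world example is invented:

```python
# Illustrative sketch of a lexicographic preference upgrade: an upgrade with
# proposition p makes every p-world strictly better than every non-p-world,
# while preserving the old order inside each zone.

def upgrade(order, prop):
    """order: list of worlds from best to worst; prop: the set of p-worlds.
    Returns the best-to-worst order after the upgrade with p."""
    return [w for w in order if w in prop] + [w for w in order if w not in prop]

order = ["w1", "w2", "w3", "w4"]      # w1 currently the most preferred world
p = {"w3", "w4"}                      # content of the suggestion/command
new_order = upgrade(order, p)         # p-worlds promoted, old order kept
# new_order == ["w3", "w4", "w1", "w2"]
```

A conflicting second command with the complement of p would demote those worlds again, which is the kind of sequence the reduction axioms are designed to reason about.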
This study focuses on examining the thematic landscape of the history of scholarly publication in business ethics. We analyze the titles, abstracts, full texts, and citation information of all research papers published in the field’s leading journal, the Journal of Business Ethics, from its inaugural issue in February 1982 until December 2016—a dataset that comprises 6308 articles and 42 million words. Our key method is a computational algorithm known as probabilistic topic modeling, which we use to examine objectively the field’s latent thematic landscape based on the vast volume of scholarly texts. This “big-data” approach allows us not only to provide time-specific snapshots of various research topics, but also to track the dynamic evolution of each topic over time. We further examine the pattern of individual papers’ topic diversity and the influence of individual papers’ topic diversity on their impact over time. We conclude this study with our recommendation for future studies in business ethics research.
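The core of such a probabilistic topic model can be demonstrated on a toy corpus. The sketch below uses latent Dirichlet allocation (LDA), the most common such algorithm, via scikit-learn; the four-document corpus, topic count, and variable names are invented for illustration, while the paper applies the same idea to its 6308-article dataset:

```python
# Toy probabilistic topic modeling run: bag-of-words counts, then a 2-topic
# LDA fit. Each document comes out as a probability distribution over topics,
# which is what allows tracking a topic's share of the literature over time.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "corporate social responsibility and stakeholder engagement",
    "stakeholder engagement improves corporate reputation",
    "consumer ethics and marketing fairness",
    "marketing ethics and consumer trust",
]

X = CountVectorizer().fit_transform(docs)          # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                  # (n_docs, n_topics) proportions
```

Averaging the rows of `doc_topics` within each publication year would give exactly the kind of topic-evolution trajectory the abstract describes.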
Most theories of slurs fall into one of two families: those which understand slurring terms to involve special descriptive/informational content (however conveyed), and those which understand them to encode special emotive/expressive content. Our view is that both offer essential insights, but that part of what sets slurs apart is use-theoretic content. In particular, we urge that slurring words belong at the intersection of a number of categories in a sociolinguistic register taxonomy, one that usually includes [+slang] and [+vulgar] and always includes [-polite] and [+derogatory]. Thus, e.g., what distinguishes ‘Chinese’ from ‘chink’ is neither a peculiar sort of descriptive nor emotional content, but rather the fact that ‘chink’ is lexically marked as belonging to different registers than ‘Chinese’. It is, moreover, partly such facts which make slurring ethically unacceptable.
I first give a brief summary of a critique of the traditional theories of approximation and idealization; and after identifying one of the major roles of idealization as detaching component processes or systems from their joints, a detailed analysis is given of idealized laws – which are discoverable and/or applicable – in such processes and systems (i.e., idealized model systems). Then, I argue that dispositional properties should be regarded as admissible properties for laws and that such an inclusion supplies the much needed connection between idealized models and the laws they ‘produce’ or ‘accommodate’. And I then argue that idealized law-statements so produced or accommodated in the models may be either true simpliciter or true approximately, but the latter is not because of the idealizations involved. I argue that the kind of limiting-case idealizations that produce approximate truth is best regarded as approximation; and finally I compare my theory with some existing theories of laws of nature. We seem to trace [in King Lear] ... the tendency of imagination to analyse and abstract, to decompose human nature into its constituent factors, and then to construct beings in whom one or more of these factors is absent or atrophied or only incipient.
The paper discusses the recent literature on abstraction/idealization in connection with the “paradox of infinite idealization.” We use the case of taking the thermodynamic limit in dealing with phase transitions and critical phenomena to broach the subject. We then argue that the method of infinite idealization is widely used in the practice of science, and that not all uses of the method are the same. We then confront the problem of the compatibility of infinite idealization with scientific realism. We propose and defend a contextualist position for realism and argue that the cases of infinite idealization appear to be fully compatible with contextual realism.
Globalization has changed almost every facet of life for people around the world, and today the flow of influence is no longer uni-directional. It is argued that East Asian societies are anchored in an indigenous form of hierarchical relationalism, where social structure is produced by relational obligations of an ethical and normative nature that have slowed the “melting into air” of traditional culture prophesied by Marx. The successful modernization of East Asia has involved hybridization, compartmentalization, and sequencing of traditional psychological features of Confucianist societies such as delay of gratification and respect for education, paternalistic leadership, filial piety, and beliefs in harmony or benevolence. Features of hierarchical relationalism are adaptable to creating niches for East Asian societies that thrive under globalization as characterized by the paradoxical coupling of economic inequality in fact with discourses of equality in principle. Moral, ethical demands for enlightened leadership constrain East Asian elites to at least attempt to protect subordinates and societal well-being. A fundamental contribution of East Asia to global society may be in the articulation of how to ameliorate economic inequality using Confucian principles of hierarchical relationalism.
In this paper, we establish a reaction-diffusion predator-prey model with weak Allee effect and delay and analyze the conditions for Turing instability. The effects of the Allee effect and delay on pattern formation are discussed by numerical simulation. The results show that pattern formations change with the addition of the weak Allee effect and delay. More specifically, as the Allee effect constant and the delay increase, coexistence of spotted and stripe patterns, stripe patterns, and mixed patterns emerge successively. From an ecological point of view, we find that the Allee effect and delay play an important role in the spatial invasion of populations.
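The Turing-instability analysis mentioned above rests on a standard dispersion-relation test, which can be sketched as follows. The Jacobian entries and diffusion rates below are made-up numbers, not the paper's model: the homogeneous equilibrium must be stable without diffusion yet destabilised by diffusion at some wavenumber k.

```python
# Hedged sketch of the standard Turing-instability test for a two-species
# reaction-diffusion system linearised about a homogeneous equilibrium.

def turing_unstable(a11, a12, a21, a22, d1, d2, k2_grid):
    """Jacobian [[a11, a12], [a21, a22]] at equilibrium; d1, d2 diffusion rates.
    True iff the equilibrium is stable without diffusion but some wavenumber
    k (k2 = k**2 from k2_grid) makes the spatial mode grow."""
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    if not (tr < 0 and det > 0):      # must be stable without diffusion
        return False
    # Dispersion relation: h(k^2) = d1*d2*k^4 - (d2*a11 + d1*a22)*k^2 + det;
    # instability iff h(k^2) < 0 for some k^2 > 0.
    return any(d1 * d2 * k2**2 - (d2 * a11 + d1 * a22) * k2 + det < 0
               for k2 in k2_grid)

# Classic activator-inhibitor setup: self-activation (a11 > 0), inhibition
# (a22 < 0), and a much faster-diffusing inhibitor (d2 >> d1).
k2_grid = [0.1 * i for i in range(1, 200)]
print(turing_unstable(1.0, -2.0, 2.0, -3.0, 1.0, 20.0, k2_grid))  # prints True
```

With equal diffusion rates (d2 = d1) the same kinetics give no instability, which is why the diffusion ratio is the key control parameter in such pattern-formation studies.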
Recent research has uncovered the dark side of creativity by finding that creative individuals are more likely to engage in unethical behavior. However, we argue that not all creative individuals make trouble. Using moral self-regulation theory as our overarching theoretical framework, we examine individuals’ moral identity as a boundary condition and moral disengagement as a mediating mechanism to explain when and how individual creativity is associated with workplace deviant behavior. We conducted two field studies using multi-source data to test our hypotheses. In Study 1, the results indicated that creativity positively predicted moral disengagement for those low in moral identity. In Study 2, with multi-wave data, we replicated the Study 1 finding that moral identity moderates the effect of creativity on moral disengagement, and further revealed that moral disengagement mediated the interactive effects of creativity and moral identity on workplace deviant behavior. The theoretical and practical implications of these findings and directions for future research are discussed.
Let Λ be a singular cardinal of uncountable cofinality ψ. Under various assumptions about the sizes of covering families for cardinals below Λ, we prove upper bounds for the covering number cov(Λ, Λ, ν⁺, 2). This covering number is closely related to the cofinality of the partial order ([Λ]", ⊆).
We investigate the extent to which perceptions of the authenticity of supervisor’s personal integrity and character (ASPIRE) moderate the relationship between people’s love of money (LOM) and propensity to engage in unethical behavior (PUB) among 266 part-time employees who were also business students in a five-wave panel study. We found that a high level of ASPIRE perceptions was related to high love-of-money orientation, high self-esteem, but low unethical behavior intention (PUB). Unethical behavior intention (PUB) was significantly correlated with their high Machiavellianism, low self-esteem, and low intrinsic religiosity. Our counterintuitive results revealed that the main effect of LOM on PUB was not significant, but the main effect of ASPIRE on PUB was significant. Further, the significant interaction effect between LOM and ASPIRE on unethical behavior intention provided profoundly interesting findings: High LOM was related to high unethical behavior intention for people with low ASPIRE, but was related to low unethical intention for those with high ASPIRE. People with high LOM and low ASPIRE had the highest unethical behavior intention; whereas those with high LOM and high ASPIRE had the lowest. We discuss results in light of individual differences, ethical environment, and perceived demand characteristics.
Review of: Sophia Roosth, Synthetic: How Life Got Made (University of Chicago Press, 2017); and Andrew S. Balmer, Katie Bulpin, and Susan Molyneux-Hodgson, Synthetic Biology: A Sociology of Changing Practices (Palgrave Macmillan, 2016).
'I never can catch myself at any time without a perception, and never can observe any thing but the perception.' These famous words of David Hume, on his inability to perceive the self, set the stage for JeeLoo Liu and John Perry's collection of essays on self-awareness and self-knowledge. This volume connects recent scientific studies on consciousness with the traditional issues about the self explored by Descartes, Locke and Hume. Experts in the field offer contrasting perspectives on matters such as the relation between consciousness and self-awareness, the notion of personhood and the epistemic access to one's own thoughts, desires or attitudes. The volume will be of interest to philosophers, psychologists, neuroscientists, cognitive scientists and others working on the central topics of consciousness and the self.
Interindustry linkage analysis is an important interdisciplinary research field spanning technical economics and complex systems, and its results can serve as critical bases for making strategies and policies of economic development. This study reviews previous methods for measuring interindustry linkages and their disadvantages, and puts forward a new method for interindustry linkage analysis in a complex economic system on the basis of a demand-driven, multisector input-output model. First, it makes a further decomposition of the Leontief inverse matrix in the economic sense and decomposes the gross output of one industrial sector or its sub-industries into three components. Then, it analyzes the structural features of output and measures the interindustry linkages between two industrial sectors with three indices: interindustry linkage effect, interindustry linkage contribution, and interindustry linkage coefficient. Compared with previous measurements, the method in this study has three obvious advantages: it integrates the sectoral internal effect and external linkage effect at the same time; it can not only measure the interindustry linkage effects between two given industrial sectors but also clearly describe the composition ratio of the direct and indirect interindustry linkage effects; and it adopts, respectively, the absolute flow value, relative flow value, and unit relative value to measure the linkages comprehensively. Finally, this study takes China’s 2017 input-output data as an application case to analyze the structural features of output of its manufacturing and producer services and to measure the interindustry linkages between them.
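The building blocks this decomposition starts from can be shown in a few lines of NumPy. The three-sector coefficient matrix and final-demand vector below are invented for illustration (the paper's actual decomposition of the Leontief inverse is more detailed than this standard three-way split):

```python
# Minimal input-output sketch: the Leontief inverse L = (I - A)^-1 and the
# split of gross output into initial final demand, direct requirements, and
# indirect requirements.
import numpy as np

A = np.array([[0.2, 0.1, 0.0],      # technical coefficients a_ij: input from
              [0.1, 0.3, 0.2],      # sector i needed per unit of sector j's
              [0.0, 0.2, 0.1]])     # gross output
f = np.array([100.0, 50.0, 80.0])   # final demand by sector

I = np.eye(3)
L = np.linalg.inv(I - A)            # Leontief inverse
x = L @ f                           # gross output: x = L f

# Three-way split of gross output: initial demand f, direct effect A f,
# and the remaining indirect effect (L - I - A) f = (A^2 + A^3 + ...) f.
initial, direct = f, A @ f
indirect = x - initial - direct
```

Ratios such as `direct / x` and `indirect / x` are the kind of composition shares the abstract's second advantage refers to.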
Fecal microbiota transplantation (FMT) has demonstrated efficacy and is increasingly being used in the treatment of patients with recurrent Clostridium difficile infection. Despite a lack of high-quality trials to provide more information on the long-term effects of FMT, there has been great enthusiasm about the potential for expanding its applications. However, FMT presents many serious ethical and social challenges that must be addressed as part of a successful regulatory policy response. In this article, we draw on a sample of the scientific and bioethics literatures to examine clusters of ethical and social issues arising in five main areas: informed consent and the vulnerability of patients; determining what a “suitable healthy donor” is; safety and risk; commercialization and potential exploitation of vulnerable patients; and public health implications. We find that these issues are complex and worthy of careful consideration by health care professionals. Desperation of a patient should not be the basis for selecting treatment with FMT, and the patient's interests should always be of paramount concern. Authorities must prioritize development of appropriate and effective regulation of FMT to safeguard patients and donors, promote further research into safety and efficacy, and avoid abuse of the treatment.
Biobanks are potential goldmines for genomics research. They have become increasingly common as a means to determine the relationship between lifestyle, environmental exposures and predisposition to genetic disease. More and more countries are developing massive national-scale biobanks, including Iceland, the UK and Estonia. Now several large-scale regional and national biobanks are planned in China, such as Shanghai Biobank, which is defined as a key element in Shanghai's twelfth five-year Development Plan of Science and Technology. It is imperative that the authors, who are in charge of the ethical aspects of Shanghai Biobank, discuss the ethical aspects of these biobanks up front. Currently there is a great deal of heterogeneity in the approaches to informed consent taken by different countries. In this article, after briefly introducing the biobanks in China, we focus on the three most common approaches: classical informed consent, tiered consent, and one-time general (or blanket) consent, and propose a version of the latter for China, based on compelling arguments.
In his classic book The Foundations of Statistics, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under infinite countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
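The representation the abstract describes can be made concrete with a toy example: once a probability over states and a utility over consequences are fixed, acts (functions from states to consequences) are ranked by expected utility. The states, consequences, and numbers below are invented for illustration:

```python
# Toy Savage-style setup: subjective probability over states, utility over
# consequences, and acts as state-to-consequence mappings ranked by
# expected utility.

prob = {"rain": 0.3, "sun": 0.7}                  # probability over states
util = {"wet": 0.0, "dry": 1.0, "burdened": 0.8}  # utility over consequences

# Acts map states to consequences. Note take_umbrella is a *constant act*:
# it yields the same consequence in every state, as in Savage's assumption.
take_umbrella = {"rain": "burdened", "sun": "burdened"}
leave_it      = {"rain": "wet", "sun": "dry"}

def expected_utility(act):
    return sum(p * util[act[s]] for s, p in prob.items())

eu_take, eu_leave = expected_utility(take_umbrella), expected_utility(leave_it)
# eu_take = 0.8 and eu_leave = 0.3*0.0 + 0.7*1.0 = 0.7, so the represented
# agent prefers taking the umbrella.
```

The representation theorem runs in the other direction: it starts from the preference relation over acts and recovers `prob` and `util` from the postulates, rather than assuming them as given here.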
In _Sino-Theology and the Philosophy of History_ Leopold Leeb presents the ideas of an influential Chinese intellectual, Liu Xiaofeng, whose approach to the question of a Christian theology for China is both controversial and inspiring.
Past work has shown systematic differences between Easterners' and Westerners' intuitions about the reference of proper names. Understanding when these differences emerge in development will help us understand their origins. In the present study, we investigate the referential intuitions of English- and Chinese-speaking children and adults in the U.S. and China. Using a truth-value judgment task modeled on Kripke's classic Gödel case, we find that the cross-cultural differences are already in place at age seven. Thus, these differences cannot be attributed to later education or enculturation. Instead, they must stem from differences that are present in early childhood. We consider alternate theories of reference that are compatible with these findings and discuss the possibility that the cross-cultural differences reflect differences in perspective-taking strategies.
This study treats firm productivity as an accumulation of productive intangibles and posits that stakeholder engagement associated with better corporate social performance helps develop such intangibles. We hypothesize that because shareholders factor improved productive efficiency into stock price, productivity mediates the relationship between corporate social and financial performance. Furthermore, we argue that key stakeholders’ social considerations are more valuable for firms with higher levels of discretionary cash and income stream uncertainty. Therefore, we hypothesize that those two contingencies moderate the mediated process of corporate social performance with financial performance. Our analysis, based on a comprehensive longitudinal dataset of U.S. manufacturing firms from 1992 to 2009, lends strong support to these hypotheses. In short, this paper uncovers a productivity-based, context-dependent mechanism underlying the relationship between corporate social performance and financial performance.
Diversity of agents occurs naturally in epistemic logic, and dynamic logics of information update and belief revision. In this paper we provide a systematic discussion of different sources of diversity, such as introspection ability, powers of observation, memory capacity, and revision policies, and we show how these can be encoded in dynamic epistemic logics allowing for individual variation among agents. Next, we explore the interaction of diverse agents by looking at some concrete scenarios of communication and learning, and we propose a logical methodology to deal with these as well. We conclude with some further questions on the logic of diversity and interaction.
Recent studies have revealed that the temporal lobe, a cortical region thought to be in charge of episodic and semantic memory, is involved in creative insight. This work examines the contributions of discrete temporal regions to insight. Activity in the medial temporal regions is indicative of novelty recognition and detection, which is necessary for the formation of novel associations and the “Aha!” experience. The fusiform gyrus mainly affects the formation of gestalt-like representation and perspective taking. The anterior and posterior middle temporal gyri are individually associated with extensive semantic processing and inhibiting salient or routine word associations, which are necessary to form non-salient, novel and remote associations. The anterior and posterior superior temporal gyri are individually responsible for integrating/binding and accessing various types of available conceptual representations. Based on current knowledge, an integrated model of the temporal lobe's role in insight and some future directions are proposed.