DBS Think Tank IX was held on August 25–27, 2021 in Orlando, FL, with US-based participants largely in person and overseas participants joining by video conferencing technology. The DBS Think Tank was founded in 2012 and provides an open platform where clinicians, engineers, and researchers can freely discuss current and emerging deep brain stimulation technologies, as well as the logistical and ethical issues facing the field. The consensus among the DBS Think Tank IX speakers was that DBS has expanded in scope and has been applied to multiple brain disorders in an effort to modulate neural circuitry. After collectively sharing our experiences, it was estimated that globally more than 230,000 DBS devices have been implanted for neurological and neuropsychiatric disorders. As such, this year’s meeting focused on advances in the following areas: neuromodulation in Europe, Asia, and Australia; cutting-edge technologies; neuroethics; interventional psychiatry; adaptive DBS; neuromodulation for pain; network neuromodulation for epilepsy; and neuromodulation for traumatic brain injury.
The Lottery Paradox has been thought to provide a reductio argument against probabilistic accounts of inductive inference. As a result, much work in artificial intelligence has concentrated on qualitative methods of inference, including default logics, which are intended to model some varieties of inductive inference. It has recently been shown that the paradox can be generated within qualitative default logics. However, John Pollock's qualitative system of defeasible inference does avoid the Lottery Paradox by incorporating a rule designed specifically for that purpose. I shall argue that Pollock's system instead succumbs to a worse disease: it fails to allow for induction at all.
In 1978, as the protests against the Shah of Iran reached their zenith, philosopher Michel Foucault was working as a special correspondent for _Corriere della Sera_ and _le Nouvel Observateur_. During his little-known stint as a journalist, Foucault traveled to Iran, met with leaders like Ayatollah Khomeini, and wrote a series of articles on the revolution. _Foucault and the Iranian Revolution_ is the first book-length analysis of these essays on Iran, the majority of which have never before appeared in English. Accompanying the analysis are annotated translations of the Iran writings in their entirety and the at times blistering responses from such contemporaneous critics as Middle East scholar Maxime Rodinson, as well as comments on the revolution by feminist philosopher Simone de Beauvoir. In this important and controversial account, Janet Afary and Kevin B. Anderson illuminate Foucault's support of the Islamist movement. They also show how Foucault's experiences in Iran contributed to a turning point in his thought, influencing his ideas on the Enlightenment, homosexuality, and his search for political spirituality. _Foucault and the Iranian Revolution_ informs current discussion on the divisions that have reemerged among Western intellectuals over the response to radical Islamism after September 11. Foucault's provocative writings are thus essential for understanding the history and the future of the West's relationship with Iran and, more generally, with political Islam. In their examination of these journalistic pieces, Afary and Anderson offer a surprising glimpse into the mind of a celebrated thinker.
Louis Dupré’s death marks the passing of a philosopher who made a profound contribution to the study of Marx, Hegel, and the wider tradition, and who needs to be reread today. This memoriam acknowledges his importance by placing him in conversation with the great Marxist humanist Raya Dunayevskaya.
Marx concentrated on Western Europe and North America in his core writings, but discussions of Asia, the Middle East, Africa, Eastern Europe, and Latin America are scattered throughout his work. In the Communist Manifesto (1848) and his writings for the New York Tribune, Marx posited a universal theory of historical and economic development in which non-Western societies represented backwardness, but could progress into modernity with the external impetus of the world market. Later, especially in the Grundrisse (1857–58) and the recently available Ethnological Notebooks of 1879–82, Marx gradually altered this implicitly unilinear model, replacing it with a more multilinear one in which non-Western societies (in which he included Russia) might be able to embark upon an alternate form of modernity that would offer a new challenge to capitalist modernity. The basis of this alternate form was economic, in the “communal” property forms that he saw as underlying many Asian societies, as opposed to Western-style private property.
This is an exercise in the metaphysics of causation, an essay loosely in the empiricist tradition that defends a full-blooded realism for both material and abstract objects. Fales begins, as have most empiricists, with introspectively accessible phenomena. Although he claims to take doubts about the "given" seriously, the upshot appears to be that we must accommodate the fact that the philosophically misled have had such doubts: the given has foundational status, and so is known, but that fact may itself be obscured by the complexity of the phenomenal world and the difficulty of the philosophical enterprise. As Fales says, "foundationalism is by no means dead".
This work provides a conceptual foundation for a Bayesian approach to artificial inference and learning. I argue that Bayesian confirmation theory provides a general normative theory of inductive learning and therefore should have a role in any artificially intelligent system that is to learn inductively about its world. I modify the usual Bayesian theory in three ways directly pertinent to an eventual research program in artificial intelligence. First, I construe Bayesian inference rules as defeasible, allowing them to be overridden in certain contexts and therefore allowing them to play a part in a hybrid system, coexisting with non-Bayesian modes of inference. Second, I take seriously the need to find meaningful prior probabilities for hypotheses, and elaborate means for supplying an artificial intelligence with such. Third, I address the computational complexity of Bayesian inference by reference to simplifications using causal networks and by allowing the probabilistic acceptance of hypotheses and subsequent qualitative inference to supplement Bayesian reasoning. The result is a Pragmatic Bayesian model of induction.
Carter and Leslie's Doomsday Argument maintains that reflection upon the number of humans born thus far, when that number is viewed as having been uniformly randomly selected from amongst all humans, past, present and future, leads to a dramatic rise in the probability of an early end to the human experiment. We examine the Bayesian structure of the Argument and find that the drama is largely due to its oversimplification.
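The Bayesian shift the abstract describes can be made concrete with a toy two-hypothesis calculation. The hypotheses, priors, and population figures below are illustrative assumptions for the sketch, not values taken from Carter, Leslie, or the paper:

```python
# Toy sketch of the Doomsday Argument's Bayesian core: treat our birth rank r
# as uniformly drawn from 1..N, where N is the total number of humans ever to
# live under a given hypothesis. All numbers here are illustrative assumptions.

prior = {"doom_soon": 0.5, "doom_late": 0.5}   # assumed equal priors
N = {"doom_soon": 2e11, "doom_late": 2e14}     # assumed totals of humans ever born
r = 1e11  # rough birth rank of a present-day human (assumption)

# Likelihood of observing rank r under hypothesis h: 1/N[h] when r <= N[h]
likelihood = {h: (1.0 / N[h] if r <= N[h] else 0.0) for h in prior}

# Bayes' theorem: posterior ∝ prior × likelihood, normalized over hypotheses
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # nearly all posterior mass shifts to "doom_soon"
```

The "dramatic rise" is visible in how the uniform-rank likelihood (1/N) penalizes large-N hypotheses; the paper's point is that this drama depends on that oversimplified sampling model.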
I consider three aspects in which machine learning and philosophy of science can illuminate each other: methodology, inductive simplicity and theoretical terms. I examine the relations between the two subjects and conclude that they are very close.
The common opinion has been that evolution results in the continuing development of more complex forms of life, generally understood as more complex organisms. The arguments supporting that opinion have recently come under scrutiny and been found wanting. Nevertheless, the appearance of increasing complexity remains. So, is there some sense in which evolution does grow complexity? Artificial life simulations have consistently failed to reproduce even the appearance of increasing complexity, which poses a challenge. Simulations, as much as scientific theories, are obligated at least to save the appearances! We suggest a relation between these two problems, understanding biological complexity growth and the failure to model even its appearances. We present a different understanding of that complexity which evolution grows, one that genuinely runs counter to entropy and has thus far eluded proper analysis in information-theoretic terms. This complexity is reflected best in the increase in niches within the biosystem as a whole. Past and current artificial life simulations lack the resources with which to grow niches and so to reproduce evolution’s complexity. We propose a more suitable simulation design integrating environments and organisms, allowing old niches to change and new ones to emerge.
We further develop the mathematical theory of causal interventions, extending earlier results of Korb, Twardy, Handfield, and Oppy (2005) and Spirtes, Glymour, and Scheines (2000). Some of the skepticism surrounding causal discovery has concerned the fact that using only observational data can radically underdetermine the best explanatory causal model, with the true causal model appearing inferior to a simpler, faithful model (cf. Cartwright, 2001). Our results show that experimental data, together with some plausible assumptions, can reduce the space of viable explanatory causal models to one.
Dual labor market theory, developed as an explanation of underemployment and poverty within the economy, may also be applied to the illicit economy of crime. Criminal careers are differentiated into a primary sector, with occupational stability, low failure rate, and high chances of advancement; and a secondary sector, with instability, high failure rate, and lack of "market" control. The attraction of criminal careers, the likelihood of incarceration, and the effects of law enforcement are best understood in these contexts.
Emerging cybertechnologies, such as social digibots, bend epistemological conventions of life and culture already complicated by human and animal relationships. Virtually augmented niches of machines and organic life promise new free-energy-governed selection of intelligent digital life. These provocative eco-evolutionary contexts demand a theory of minds to characterize and validate the immersive social phenomena universally shaping cultural affordances.
I analyze the frame problem and its relation to other epistemological problems for artificial intelligence, such as the problem of induction, the qualification problem and the "general" AI problem. I dispute the claim that extensions to logic (default logic and circumscriptive logic) will ever offer a viable way out of the problem. In the discussion it will become clear that the original frame problem is really a fairy tale: as originally presented, and as tools for its solution are circumscribed by Pat Hayes, the problem is entertaining, but incapable of resolution. The solution to the frame problem becomes available, and even apparent, when we remove artificial restrictions on its treatment and understand the interrelation between the frame problem and the many other problems for artificial epistemology. I present the solution to the frame problem: an adequate theory and method for the machine induction of causal structure. Whereas this solution is clearly satisfactory in principle, and in practice real progress has been made in recent years in its application, its ultimate implementation is in prospect only for future generations of AI researchers.
This book describes the application of Artificial Life simulation to evolutionary scenarios of wide ethical interest, including the evolution of altruism, rape and abortion, providing a new meaning to “experimental philosophy”. The authors also apply evolutionary ALife techniques to explore contentious issues within evolutionary theory itself, such as the evolution of aging. They justify these uses of simulation in science and philosophy, both in general and in their specific applications here. Evolving Ethics will be of interest to researchers, enthusiasts, students and interested lay readers in the fields of Artificial Life, philosophy of science, ethics, agent- and individual-based modeling in ecology and the social sciences, computer simulation, evolutionary biology, evolutionary psychology and the social sciences. Dr Steven Mascaro is a researcher in computer simulation and Artificial Life. Dr Kevin Korb is a Reader in the Clayton School of Information Technology, Monash University. Dr Ann Nicholson is an Associate Professor in the Clayton School of Information Technology, Monash University. Owen Woodberry is a researcher in the Clayton School of Information Technology, Monash University.
The promises of precision medicine are often heralded in the medical and lay literature, but routine integration of genomics in clinical practice is still limited. While the “last mile” infrastructure to bring genomics to the bedside has been demonstrated in some healthcare settings, a number of challenges remain, both in the receptivity of today's health system and in its technical and educational readiness to respond to this evolution in care. To improve the impact of genomics on health and disease management, we will need to integrate both new knowledge and new care processes into existing workflows. This change will be onerous and time-consuming, but hopefully valuable to the provision of high quality, economically feasible care worldwide.
Some neurotropic enteroviruses hijack Trojan horse/raft commensal gut bacteria to render devastating biomimicking cryptic attacks on human/animal hosts. Such virus-microbe interactions manipulate hosts’ gut-brain axes with accompanying infection-cycle-optimizing central nervous system disturbances, including severe neurodevelopmental, neuromotor, and neuropsychiatric conditions. Co-opted bacteria thus indirectly influence host health, development, behavior, and mind as possible “fair-weather-friend” symbionts, switching from commensal to context-dependent pathogen-like strategies benefiting gut-bacteria fitness.
Raya Dunayevskaya is hailed as the founder of Marxist-Humanism in the United States. In this new collection of her essays, co-editors Peter Hudis and Kevin B. Anderson have crafted a work in which the true power and originality of Dunayevskaya's ideas are displayed. This extensive collection of writings on Hegel, Marx, and dialectics captures Dunayevskaya's central dictum that, contrary to the established views of Hegelians and Marxists, Hegel was of signal importance to the theory and practice of Marxism. The Power of Negativity sheds light not only on Marxist-Humanism and the rooting of Dunayevskaya's Marxist-Humanist theories in Hegel, but also on the life of one of America's most penetrating and provocative critical thinkers.
Bayesian networks are computer programs which represent probabilistic relationships graphically as directed acyclic graphs, and which can use those graphs to reason probabilistically, often at relatively low computational cost. Almost every expert system in the past tried to support probabilistic reasoning, but because of the computational difficulties they took approximating short-cuts, such as those afforded by MYCIN's certainty factors. That all changed with the publication of Judea Pearl's Probabilistic Reasoning in Intelligent Systems in 1988, which synthesized a decade of research making accurate graphical probabilistic reasoning computationally achievable. Bayesian network technology is now one of the fastest growing fields of research in artificial intelligence; that it has become a publication industry in its own right is shown by a search on Google Scholar. This development, together with a parallel related growth in the use of causal discovery algorithms which automate the learning of Bayesian networks from sample data, has generated considerable interest, and controversy, within the philosophy-of-science community. Three central questions bringing together AI researchers and philosophers of science are: Are Bayesian networks Bayesian? What is the relation between probability and causality? Are the assumptions behind causal discovery of Bayesian networks realistic or fantastical? Jon Williamson, as a philosopher of science with a keen interest in the technology, asks and answers these questions in his new book. Although it is self-contained, his book is not ideal as an introduction to the technology, nor is it optimal even as an introduction to the philosophical problems in interpreting Bayesian networks. Rather, Williamson's book is an attempt to move the debate forward by solving the central problems of the ….
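The core idea the review describes, a DAG whose nodes carry conditional probability tables and whose joint distribution factorizes over those tables, can be sketched in a few lines. The network structure (Rain and Sprinkler as independent parents of WetGrass) and all numbers below are invented for illustration:

```python
# Minimal Bayesian network sketch: Rain -> WetGrass <- Sprinkler.
# The joint factorizes as P(R) * P(S) * P(W | R, S); inference here is done
# by brute-force enumeration, which the graph structure keeps cheap for
# small networks. All probabilities are illustrative assumptions.
from itertools import product

P_rain = {True: 0.2, False: 0.8}          # P(Rain)
P_sprinkler = {True: 0.1, False: 0.9}     # P(Sprinkler)
P_wet = {                                  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(r, s, w):
    """Joint probability from the DAG factorization."""
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1 - pw)

# Query by enumeration: P(Rain=True | WetGrass=True)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
p_rain_given_wet = num / den
print(p_rain_given_wet)
```

Real implementations replace brute-force enumeration with algorithms (e.g., variable elimination or junction trees) that exploit the graph's conditional independencies, which is what makes Pearl-style reasoning "computationally achievable" at scale.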
Rolls's presentation of emotion as integral to cognition is a welcome counter to a long tradition of treating the two as antagonists. His eduction of experimental evidence in support of this view is impressive. However, we find his excursion into the philosophy of consciousness less successful. Rolls gives syntactical manipulation the central role in consciousness (in stark contrast to Searle, for whom “mere” syntax inevitably falls short of consciousness), and leaves us wondering about the roles left for emotion after all.
We report an eye movement experiment that investigates the effects of collocation strength and contextual predictability on the reading of collocative phrases by L2 English readers. Thirty-eight Chinese learners of English as a foreign language read 40 sentences, each including a specific two-word phrase that was either a strong or weak adjective-noun collocation and was either highly predictable or unpredictable from the previous sentence context. Eye movement measures showed that L2 reading times for the collocative phrases were sensitive to both collocation strength and contextual predictability. However, an interaction effect between these factors, which appeared relatively late in the eye movement record, additionally revealed that contextual predictability more strongly influenced time spent reading weak compared with strong collocations. This was most likely because the greater familiarity of strong collocations facilitated their integration, even in the absence of strong contextual constraint. We discuss the findings in terms of the value of collocations in second language learning.
This paper explores the relationship between psychological contract violations (PCVs) related to diversity climate and professional employee outcomes. We found that for our sample of US professionals of color including US-born African Americans, Hispanics, Asians, and Native Americans, employee perceptions of breach in diversity promise fulfillment (DPF), after controlling for more general organizational promise fulfillment (OPF), led to lower reported organizational commitment (OC) and higher turnover intentions (TI). Interactional justice partially mediated the relationship between DPF and outcomes. Procedural justice and DPF interacted to influence OC of employees of color. For respondents who perceived a lack of DPF, moderate racial awareness was associated with greater PCV. We discuss the implications of the findings and provide directions for future research.
We present a probabilistic extension to active path analyses of token causation (Halpern & Pearl 2001, forthcoming; Hitchcock 2001). The extension uses the generalized notion of intervention presented in (Korb et al. 2004): we allow an intervention to set any probability distribution over the intervention variables, not just a single value. The resulting account can handle a wide range of examples. We do not claim the account is complete --- only that it fills an obvious gap in previous active-path approaches. It still succumbs to recent counterexamples by Hiddleston (2005), because it does not explicitly consider causal processes. We claim three benefits: a detailed comparison of three active-path approaches, a probabilistic extension for each, and an algorithmic formulation.
Diversity scholars have emphasized the critical role of corporate leaders for ensuring the success of diversity strategic initiatives in organizations. This study reports on business school leaders’ attributions regarding the causes for and solutions to the low representation of U.S. faculty of color in business schools. Results indicate that leaders with greater awareness of racial issues rated an inhospitable organizational culture as a more important cause and cultural change and recruitment as more important solutions to faculty of color under-representation than did less racially aware respondents. Aware leaders also rated individual minority-group member responsibility for performance a less important solution than did less racially aware respondents. Implications are discussed.
The investigation of probabilistic causality has been plagued by a variety of misconceptions and misunderstandings. One has been the thought that the aim of the probabilistic account of causality is the reduction of causal claims to probabilistic claims. Nancy Cartwright (1979) has clearly rebutted that idea. Another ill-conceived idea continues to haunt the debate, namely the idea that contextual unanimity can do the work of objective homogeneity. It cannot. We argue that only objective homogeneity in combination with a causal interpretation of Bayesian networks can provide the desired criterion of probabilistic causality.
Stakeholder theory has received greater scholarly and practitioner attention as organizations consider the interests of various groups affected by corporate operations, including employees. This study investigates two dimensions of psychological climate, specifically perceived pay equity and diversity climate, for one such stakeholder group: racioethnic minority professionals. We examined the main effect of pay equity perceptions among U.S. professionals of color, and the influence of perceived internal and external pay equity on turnover intentions. We also investigated the interactive effect of perceptions of pay equity and diversity climate on turnover intentions. Results indicated that pay equity perceptions were negatively associated with turnover intentions. Our findings showed that perceptions of internal pay equity influenced turnover intentions but perceptions of external equity did not. Further, perceptions of pay equity and the diversity climate interactively influenced turnover intentions. Participants who reported an unfavorable diversity climate and low perceived pay equity were most likely to report turnover intentions. Simple slope analysis for moderate pay equity was also significant. When perceived pay equity was high, favorability of the diversity climate did not affect turnover intentions. The findings have useful practical implications. When pay was perceived as equitable, participants appeared to pay less attention to the diversity climate. Employee pay equity perceptions may be malleable; sharing information with employees about pay levels during performance reviews may enhance perceptions of pay equity. The findings suggest that, consistent with stakeholder theory, organizations should attend to perceptions of both pay equity and diversity climate when striving to minimize the turnover intentions of professionals of color.