In this note we develop a method for constructing finite totally-ordered m-zeroids and prove that there exists a categorical equivalence between the category of finite, totally-ordered m-zeroids and the category of pseudo Łukasiewicz-like implicators.
In 1964, the American Medical Association invited liberal theologian Abraham Joshua Heschel to address its annual meeting in a program entitled “The Patient as a Person.” Unsurprisingly, in light of Heschel’s reputation for outspokenness, he launched a jeremiad against physicians, claiming: “The admiration for medical science is increasing, the respect for its practitioners is decreasing. The depreciation of the image of the doctor is bound to disseminate disenchantment and to affect the state of medicine itself” [1, p. 35]. Heschel’s reference to “disenchantment” suggests that he may have been familiar with the work, or at least the outlook, of sociologist Max Weber, whose 1917 address “Science as a Vocation” portrays the modern world as disenchanted by the progress of rationalism. Heschel’s life’s vocation had been to uncover the inner meaning of religious faith and to translate that faith into principled action. Heschel saw disenchantment not as an inescapable aspect of modern life but rather as the byproduct of physicians’ conscious choices to seek worldly success and material comfort. Yet, because of their privileged position as witnesses to human vulnerability, physicians possess an obligation to develop their own personhood, to re-enchant medicine, and through medicine to spark a positive transformation in all of modern life. As Heschel says, “The doctor must realize the supreme nobility of his vocation, to cultivate a taste for the pleasures of the soul. … The doctor is a major source of moral energy affecting the spiritual texture and substance of the entire society” [1, pp. 34, 38]. While Heschel’s conception of the physician’s role is romanticized and idealized, changes in the organization and practice of medicine have validated his concerns.
This book brings together an account of the structure of time with an account of our language and thought about time. It is a wide-ranging examination of recent issues in metaphysics, philosophy of language, and the philosophy of science and presents a compelling picture of the relationship of human beings to the spatiotemporal world.
For over a decade I have been arguing that Deweyan democracy fails an intuitive test for political legitimacy.1 According to this test, a political order can be legitimate only if the principles underlying its most fundamental institutions are insusceptible to reasonable rejection. Crucially, “reasonable” functions here as a technical term; a principle is reasonably rejectable when its rejection is consistent with embracing the ideal of a constitutional democracy as a fair system of social cooperation among free and equal moral persons. From this emerges the corollary that a person is reasonable to the extent she seeks a political order whose basis is beyond reasonable rejection. The challenge for democratic theory...
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models. Key Words: additive clustering; Bayesian inference; categorization; concept learning; contrast model; features; generalization; psychological space; similarity.
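The Bayesian reformulation of generalization described above can be illustrated with a minimal sketch: interval hypotheses on a discretized one-dimensional stimulus continuum, a uniform prior, and the size principle for the likelihood. The grid size and hypothesis space here are simplifying assumptions for illustration, not the paper's exact model.

```python
def generalization(examples, probes, size=101):
    """Posterior probability that each probe shares the consequential
    region of the observed examples, under interval hypotheses."""
    # Hypotheses: all integer intervals [a, b] on the grid 0..size-1.
    post = {}
    total = 0.0
    for a in range(size):
        for b in range(a, size):
            if all(a <= x <= b for x in examples):
                # Size principle: likelihood 1/|h| per observed example,
                # so smaller consistent hypotheses get more weight.
                like = (1.0 / (b - a + 1)) ** len(examples)
                post[(a, b)] = like
                total += like
    # Generalization gradient: posterior mass of hypotheses containing
    # each probe, obtained by averaging over all consistent hypotheses.
    return [sum(p for (a, b), p in post.items() if a <= y <= b) / total
            for y in probes]
```

With a single example at 50, the gradient over probes decays monotonically with distance, echoing the exponential-like gradients the theory predicts; adding more tightly clustered examples sharpens it.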
In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: People often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling-based approximations are a common way to implement Bayesian inference, the very limited numbers of samples often used by humans seem insufficient to approximate the required probability distributions very accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory, and ask: if people are making decisions based on samples, and samples are costly, how many samples should people use to optimize their total expected or worst-case reward over a large number of decisions? We find that under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling-based or probability matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource-constrained cognition.
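The trade-off described above can be sketched for a binary decision: the accuracy of a majority vote over k posterior samples, divided by the time the decision takes. The time costs (`t_sample`, `t_overhead`) are placeholder assumptions for illustration, not the paper's fitted values.

```python
from math import comb

def p_correct(p, k):
    # Probability that the majority of k independent samples from a
    # Bernoulli(p) posterior picks the more probable option
    # (ties on even k are broken by a fair coin).
    win = sum(comb(k, j) * p**j * (1 - p)**(k - j)
              for j in range(k // 2 + 1, k + 1))
    tie = comb(k, k // 2) * p**(k // 2) * (1 - p)**(k // 2) if k % 2 == 0 else 0.0
    return win + 0.5 * tie

def reward_rate(p, k, t_sample=1.0, t_overhead=1.0):
    # Expected correct decisions per unit time: each decision costs k
    # sampling steps plus a fixed overhead (assumed unit costs).
    return p_correct(p, k) / (k * t_sample + t_overhead)
```

With p = 0.7 and unit time costs, the reward rate peaks at k = 1: many quick, sample-poor decisions outperform fewer, more accurate ones, which is the paper's central point.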
Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher‐level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, particularly the idea that higher‐level theories guide learning at lower levels. In addition, they help resolve certain issues for Bayesians, such as scientific preference for simplicity and the problem of new theories. Received July 2009; revised October 2009.
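The two-level structure can be sketched as follows: each ‘paradigm’ is a set of specific hypotheses about a binary process, and evidence bearing on the low-level hypotheses thereby shifts the posterior over paradigms. The paradigm names and parameter values below are invented for illustration, not taken from the paper.

```python
# Two hypothetical 'paradigms' (higher-level theories), each licensing a
# different set of specific hypotheses about the bias of a binary process.
PARADIGMS = {
    "biased":   [0.70, 0.80, 0.90],
    "fair-ish": [0.45, 0.50, 0.55],
}

def posterior_over_paradigms(data):
    """Posterior over paradigms given a list of 0/1 observations,
    with uniform priors over paradigms and over hypotheses within each."""
    heads = sum(data)
    tails = len(data) - heads
    scores = {}
    for name, thetas in PARADIGMS.items():
        # Marginal likelihood of the paradigm: average the data
        # likelihood over its specific low-level hypotheses.
        ml = sum(t**heads * (1 - t)**tails for t in thetas) / len(thetas)
        scores[name] = ml / len(PARADIGMS)
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}
```

A predominantly-heads sequence shifts posterior mass to the “biased” paradigm while a balanced sequence favors “fair-ish”: higher-level theory change driven entirely by evidence impinging on the lower level.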
If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.
In order to receive controlled pain medications for chronic non-oncologic pain, patients often must sign a “narcotic contract” or “opioid treatment agreement” (OTA) in which they promise not to give pills to others, use illegal drugs, or seek controlled medications from health care providers. In addition, they must agree to use the medication as prescribed and to come to the clinic for drug testing and pill counts. Patients acknowledge that if they violate the opioid treatment agreement, they may no longer receive controlled medications. OTAs have been widely implemented since they were recommended by multiple national bodies to decrease misuse and diversion of narcotic medications. But critics argue that OTAs are ethically suspect, if not unethical, and should be used with extreme care if at all. We agree that OTAs pose real dangers and must be implemented carefully. But we also believe that the most serious criticisms stem from a mistaken understanding of OTAs’ purpose and ethical basis.
Both scientists and children make important structural discoveries, yet their computational underpinnings are not well understood. Structure discovery has previously been formalized as probabilistic inference about the right structural form—where form could be a tree, ring, chain, grid, etc. Although this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge. Here we introduce a new computational model of how organizing structure can be discovered, utilizing a broad hypothesis space with a preference for sparse connectivity. Given that the inductive bias is more general, the model's initial knowledge shows little qualitative resemblance to some of the discoveries it supports. As a consequence, the model can also learn complex structures for domains that lack intuitive description, as well as predict human property induction judgments without explicit structural forms. By allowing form to emerge from sparsity, our approach clarifies how both the richness and flexibility of human conceptual organization can coexist.