I am writing this to help people who want to write Gnumeric scripts in Python without digging through the Gnumeric C documentation and source code. The installation section is targeted mainly at Debian, but I hope the alternate instructions will work on other systems. Travis Whitton wrote a nice Python/Gnumeric guide for the old API in Gnumeric 1.0; this new one was written as 1.1.20 was being released. There are new features since 1.1.18! Note that the API is still experimental and may change. With luck I'll update this HOWTO.
Note to readers: Originally I thought there was a stronger link between Maclaurin and Hume; it now seems clear to me that Hume is not taking his mechanics from Maclaurin’s Account. Although I have still found Maclaurin useful in interpreting Hume, I suspect this draft suffers somewhat from that ambivalence. There are still similarities, and possible avenues of influence, suggesting that Hume was not ignorant of the new mechanics; but it also becomes clear that he did not understand it: although he adopts the Newtonian measure of force, he misapplies it.
Artificial Intelligence (AI) and Philosophy of Science share a fundamental problem—that of understanding causality. Bayesian network techniques have recently been used by Judea Pearl in a new approach to understanding causality and causal processes (Pearl, 2000). Pearl’s approach has great promise, but needs to be supplemented with an explicit account of causal interaction. Thus far, despite considerable interest, philosophy has provided no useful account of causal interaction. Here we provide one, employing the concepts of Bayesian networks. With it we demonstrate the failure of one of philosophy’s more sophisticated attempts to deal with the concept of causal interaction, that of Ellery Eells’ Probabilistic Causality (1991).
HOWTO get Mac-On-Linux (MOL) running under Debian when using a BenH kernel. In the Debian way, grasshopper. The good news is that getting a basic MOL running takes about 6 commands. The bad news is that to get everything working under MOL will almost certainly involve a recompile, some extra packages, some script editing, and a bunch of MOL reboots. But hopefully this makes all that easier.
Artificial Intelligence (AI) and Philosophy of Science share a fundamental problem—understanding causality. Bayesian networks have recently been used by Judea Pearl in a new approach to understanding causality (Pearl, 2000). Part of understanding causality is understanding causal interaction. Bayes nets can represent any degree of causal interaction, but researchers normally try to limit interaction, usually by replacing the full conditional probability table (CPT) with a noisy-OR function. We show that noisy-OR and another common model are merely special cases of the general linear systems definition of noninteraction. However, they apply in different situations, and we can measure the degree of causal interaction relative to any such model.
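To make the noisy-OR model mentioned above concrete, here is a minimal Python sketch (not code from the paper) that expands a noisy-OR parameterization into a full CPT for a binary effect. The function name and the example probabilities are invented for illustration:

```python
from itertools import product

def noisy_or_cpt(p):
    """Build the full CPT for a binary effect Y with independent binary
    causes X1..Xn, where p[i] is the probability that an active Xi
    alone produces Y.  Noisy-OR: P(Y=1 | x) = 1 - prod over active
    parents of (1 - p_i)."""
    cpt = {}
    for x in product([0, 1], repeat=len(p)):
        q = 1.0
        for xi, pi in zip(x, p):
            if xi:
                q *= (1.0 - pi)
        cpt[x] = 1.0 - q
    return cpt

cpt = noisy_or_cpt([0.8, 0.6])
# P(Y=1 | X1=1, X2=1) = 1 - 0.2 * 0.4 = 0.92
print(cpt[(1, 1)])
```

The point of the abstract is that such a function replaces the 2^n free parameters of a full CPT with n, which is exactly what rules out (or limits) causal interaction.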
Using Bayesian network causal models, we provide a simple, general account of probabilistic causal interaction. We also detail problems in the leading accounts, including that of Ellery Eells, and in any others that require valence reversals, contextual unanimity, or average effects.
Part of our fascination with the Maya can be attributed to the fact that they were literate . . . that is, the Classic Maya possessed a visible language that consisted of letters and a grammar, and one of the products of their literacy was the book. (Aveni 1992b, p.3).
We present a minimum message length (MML) framework for trajectory partitioning by point selection, and use it to automatically select the tolerance parameter ε for Douglas-Peucker partitioning, adapting to local trajectory complexity. By examining a range of ε for synthetic and real trajectories, it is easy to see that the best ε does vary by trajectory, and that the MML encoding makes sensible choices and is robust against Gaussian noise. We use it to explore the identification of micro-activities within a longer trajectory. This MML metric is comparable to the TRACLUS metric – and shares the constraint of abstracting only by omission of points – but is a true lossless encoding. Such encoding has several theoretical advantages – particularly with very small segments (high frame rates) – but actual performance interacts strongly with the search algorithm. Both differ from unconstrained piecewise linear approximations, including other MML formulations.
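For readers unfamiliar with the Douglas-Peucker procedure being tuned here, this is a minimal 2-D sketch for intuition only; it takes ε as a given, whereas the abstract's contribution is selecting ε automatically via MML. The function and the toy trajectory are hypothetical:

```python
def douglas_peucker(points, eps):
    """Simplify a polyline: keep the two endpoints; keep the interior
    point farthest from the chord only if its perpendicular distance
    exceeds eps, recursing on both halves."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard coincident endpoints
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > eps:
        left = douglas_peucker(points[:idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return left[:-1] + right  # avoid duplicating the split point
    return [points[0], points[-1]]

# A toy trajectory with a sharp turn near x = 3.
traj = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(traj, 1.0)
```

The MML framework described above would, in effect, score candidate simplifications like `simplified` by total message length instead of requiring the user to pick ε.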
We present a probabilistic extension to active path analyses of token causation (Halpern & Pearl 2001, forthcoming; Hitchcock 2001). The extension uses the generalized notion of intervention presented in (Korb et al. 2004): we allow an intervention to set any probability distribution over the intervention variables, not just a single value. The resulting account can handle a wide range of examples. We do not claim the account is complete, only that it fills an obvious gap in previous active-path approaches. It still succumbs to recent counterexamples by Hiddleston (2005), because it does not explicitly consider causal processes. We claim three benefits: a detailed comparison of three active-path approaches, a probabilistic extension for each, and an algorithmic formulation.
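The generalized intervention can be illustrated on a toy two-variable network X → Y. All numbers below are invented for illustration; the point is only that a hard intervention do(X=x) is the degenerate case of a "soft" intervention that replaces P(X) with an arbitrary distribution q:

```python
P_Y_given_X = {0: 0.1, 1: 0.9}  # P(Y=1 | X=x), made-up values

def intervene(q):
    """Soft intervention on X: cut X's incoming arcs, replace its
    prior with q, and recompute P(Y=1).  A hard intervention
    do(X=x) is the special case q with q[x] = 1."""
    return sum(q[x] * P_Y_given_X[x] for x in (0, 1))

print(intervene({0: 0.0, 1: 1.0}))  # hard do(X=1): 0.9
print(intervene({0: 0.3, 1: 0.7}))  # soft:  0.3*0.1 + 0.7*0.9 = 0.66
```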
This paper presents an attempt to integrate theories of causal processes—of the kind developed by Wesley Salmon and Phil Dowe—into a theory of causal models using Bayesian networks. We suggest that arcs in causal models must correspond to possible causal processes. Moreover, we suggest that when processes are rendered physically impossible by what occurs on distinct paths, the original model must be restricted by removing the relevant arc. These two techniques suffice to explain late preëmption and other cases that have proved problematic for causal models.
James McAllister’s 2003 article, “Algorithmic randomness in empirical data”, claims that empirical data sets are algorithmically random and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and discuss the matter within the framework of Minimum Message Length (MML) inference, on which the theory that best compresses the data is the one with the highest posterior probability, and hence the best explanation of the data.
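The compressibility point can be illustrated (though of course not proved) by a toy experiment, using an off-the-shelf compressor as a crude stand-in for an MML encoding. Law-governed data perturbed by small noise compresses far below its raw size, while pure noise of the same length does not; the "law" and serialization here are invented for illustration:

```python
import random
import zlib

random.seed(0)

# "Empirical" data: a simple law (y = 3x) plus small integer noise,
# serialized as comma-separated text -- a stand-in for a data set.
lawlike = ",".join(
    str(3 * x + random.choice([-1, 0, 1])) for x in range(500)
).encode("ascii")

# Purely random bytes of the same length.
noise = bytes(random.randrange(256) for _ in range(len(lawlike)))

# The structured data shrinks substantially; the random data does not.
print(len(lawlike), len(zlib.compress(lawlike)), len(zlib.compress(noise)))
```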
Computer-based argument mapping greatly enhances student critical thinking, more than tripling the absolute gains made by other methods. I describe the method and my experience as an outsider. Argument mapping often showed precisely how students were erring (for example, confusing helping premises with separate reasons), making it much easier for them to fix their errors.
The investigation of probabilistic causality has been plagued by a variety of misconceptions and misunderstandings. One has been the thought that the aim of the probabilistic account of causality is the reduction of causal claims to probabilistic claims. Nancy Cartwright (1979) has clearly rebutted that idea. Another ill-conceived idea continues to haunt the debate, namely the idea that contextual unanimity can do the work of objective homogeneity. It cannot. We argue that only objective homogeneity in combination with a causal interpretation of Bayesian networks can provide the desired criterion of probabilistic causality.