|Abstract||The Fast Causal Inference (FCI) algorithm searches for features common to observationally equivalent sets of causal directed acyclic graphs. It is correct in the large sample limit with probability one, even in the possible presence of hidden variables and selection bias. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the data set. This affects both the speed of the algorithm and its accuracy on small samples, because tests of independence conditional on large numbers of variables have very low power. In this paper, I prove that the FCI algorithm can be interrupted at any stage and asked for output. The output from the interrupted algorithm is still correct with probability one in the large sample limit, although possibly less informative (in the sense that it answers “Can’t tell” for a larger number of questions) than if the FCI algorithm had been allowed to continue uninterrupted.|
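The anytime property described in the abstract can be illustrated with a minimal sketch of the kind of loop involved: conditional independence tests are run with conditioning sets of growing size, and cutting the loop off early (the interruption) removes fewer edges but never removes one that a longer run would have kept. This is an illustrative simplification, not the FCI algorithm itself; `indep_test` stands in for a hypothetical independence oracle, and all names are assumptions made for the example.

```python
from itertools import combinations

def anytime_skeleton(variables, indep_test, adjacency, max_depth=None):
    """Illustrative anytime-style edge-removal loop (NOT the real FCI).

    Tests X _||_ Y | S for conditioning sets S of growing size `depth`.
    Interrupting early via `max_depth` yields a supergraph of the full
    run's output: fewer "independent" verdicts, so more "can't tell",
    but no extra wrong removals -- the anytime idea from the abstract.
    `indep_test(x, y, s)` is a hypothetical oracle supplied by the caller.
    """
    depth = 0
    while max_depth is None or depth <= max_depth:
        any_tested = False
        for x in variables:
            for y in list(adjacency[x]):          # snapshot: we mutate below
                others = [v for v in adjacency[x] if v != y]
                if len(others) < depth:
                    continue                       # no conditioning set of this size
                for s in combinations(others, depth):
                    any_tested = True
                    if indep_test(x, y, s):
                        adjacency[x].discard(y)    # remove edge on independence
                        adjacency[y].discard(x)
                        break
        if not any_tested:
            break                                  # no larger sets possible: done
        depth += 1
    return adjacency
```

For example, with three variables where A and B are independent given C, the full run removes the A–B edge at depth 1, while a run interrupted at depth 0 keeps it: a less informative but not incorrect answer.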
Similar books and articles
Jiji Zhang & Peter Spirtes, A Transformational Characterization of Markov Equivalence for Directed Maximal Ancestral Graphs.
Peter Spirtes, A Polynomial Time Algorithm for Determining Dag Equivalence in the Presence of Latent Variables and Selection Bias.
Bert Leuridan (2009). Causal Discovery and the Problem of Ignorance. An Adaptive Logic Approach. Journal of Applied Logic 7 (2):188-205.
Peter Spirtes (2005). Graphical Models, Causal Inference, and Econometric Models. Journal of Economic Methodology 12 (1):3-34.
Added to index: 2009-01-28
Total downloads: 18 (#74,653 of 739,163)
Recent downloads (6 months): 1 (#61,778 of 739,163)