Probabilities for AI
Probability plays an essential role in many branches of AI, where it is typically assumed that we have a complete probability distribution when addressing a problem. But this is unrealistic for problems of real-world complexity. Statistical investigation gives us knowledge of some probabilities, but we generally want to know many others that are not directly revealed by our data. For instance, we may know prob(P/Q) (the probability of P given Q) and prob(P/R), but what we really want is prob(P/Q&R), and we may not have the data required to assess that directly. The probability calculus is of no help here. Given prob(P/Q) and prob(P/R), it is consistent with the probability calculus for prob(P/Q&R) to have any value between 0 and 1. Is there any way to make a reasonable estimate of the value of prob(P/Q&R)? A related problem occurs when probability practitioners adopt undefended assumptions of statistical independence simply on the basis of not seeing any connection between two propositions. This is common practice, but its justification has eluded probability theorists, and researchers are typically apologetic about making such assumptions. Is there any way to defend the practice? This paper shows that on a certain conception of probability, nomic probability, there are principles of "probable probabilities" that license inferences of the above sort. These are principles telling us that although certain inferences from probabilities to probabilities are not deductively valid, nevertheless the second-order probability of their yielding correct results is 1. This makes it defeasibly reasonable to make the inferences. Thus I argue that it is defeasibly reasonable to assume statistical independence when we have no information to the contrary. And I show that there is a function Y(r,s:a) such that if prob(P/Q) = r, prob(P/R) = s, and prob(P/U) = a (where U is our background knowledge), then it is defeasibly reasonable to expect that prob(P/Q&R) = Y(r,s:a).
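The abstract does not state the Y-function itself. One standard way to obtain a function of this shape is to assume, as a default and defeasible step, that Q and R bear independent evidence on P relative to the background U; the two likelihood ratios then multiply in odds form. The sketch below computes that odds-combination estimate in Python. The formula is a reconstruction under this independence assumption, not a quotation from the paper:

```python
def Y(r, s, a):
    """Estimate prob(P/Q&R) from r = prob(P/Q), s = prob(P/R), a = prob(P/U).

    Assumes (defeasibly) that Q and R bear independent evidence on P
    relative to the background U, so their odds ratios against the
    background multiply. This formula is a reconstruction, not quoted
    from the abstract.
    """
    if a <= 0.0 or a >= 1.0:
        raise ValueError("background probability a must lie strictly between 0 and 1")
    # Multiply the two likelihood ratios in odds form, discounting the
    # background odds once, then convert back to a probability.
    num = r * s * (1.0 - a)
    den = num + (1.0 - r) * (1.0 - s) * a
    if den == 0.0:
        raise ValueError("Y(r, s : a) is undefined for these extreme inputs")
    return num / den

# Sanity checks on the expected behavior:
# evidence that matches the background leaves the estimate unchanged,
# while two independent pieces of positive evidence reinforce each other.
print(Y(0.7, 0.3, 0.3))  # 0.7  (s = a: R adds nothing beyond background)
print(Y(0.7, 0.7, 0.5))  # ~0.845 (reinforcement above either input)
```

Note the two boundary behaviors the checks illustrate: when one input equals the background value a, it drops out of the estimate entirely, and when both inputs exceed a, the combined estimate exceeds either input alone, which is exactly the reinforcement an independence assumption delivers.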
Similar books and articles
Ernest W. Adams (1996). Four Probability-Preserving Properties of Inferences. Journal of Philosophical Logic 25 (1):1-24.
Ian Evans, Don Fallis, Peter Gross, Terry Horgan, Jenann Ismael, John Pollock, Paul D. Thorn, Jacob N. Caton, Adam Arico, Daniel Sanderman, Orlin Vakerelov, Nathan Ballantyne, Matthew S. Bedke, Brian Fiala & Martin Fricke (2007). An Objectivist Argument for Thirdism. Analysis 68 (2):149-155.
Robert C. Stalnaker (1970). Probability and Conditionals. Philosophy of Science 37 (1):64-80.
Aidan Lyon (2010). Deterministic Probability: Neither Chance nor Credence. Synthese 182 (3):413-432.
John Pollock (2011). Reasoning Defeasibly About Probabilities. Synthese 181 (2):317-352.
Added to index: 2009-02-26