Human Benchmarks on AI's Benchmark Problems

Default reasoning occurs when the available information does not deductively guarantee the truth of the conclusion, yet the conclusion is nonetheless correctly drawn. The formalisms developed in Artificial Intelligence to capture this mode of reasoning have suffered from a lack of agreement about which non-monotonic inferences should count as correct; Lifschitz (1989) therefore produced a set of "Nonmonotonic Benchmark Problems" that all future formalisms are expected to honor. The present work investigates the extent to which humans follow the prescriptions set out in these Benchmark Problems.
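A minimal illustration of the kind of inference at issue (a standard textbook example, not taken from the paper itself): in Reiter-style default logic, the "birds normally fly" default can be written as

    $\dfrac{\mathit{Bird}(x) : \mathit{Flies}(x)}{\mathit{Flies}(x)}$

Given only $\mathit{Bird}(\mathit{Tweety})$, one defaults to $\mathit{Flies}(\mathit{Tweety})$; later learning $\mathit{Penguin}(\mathit{Tweety})$, together with the knowledge that penguins do not fly, withdraws that conclusion. This retraction of a previously drawn conclusion in the light of new information is the non-monotonic behaviour that benchmark problems of the Lifschitz kind are meant to pin down.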