Abstract
Interpretation is the process by which a hearer reasons to an interpretation of a speaker's discourse. The hearer normally adopts a credulous attitude to the discourse, at least for the purposes of interpreting it. That is to say, the hearer tries to accommodate the truth of all the speaker's utterances in deriving an intended model. We present a nonmonotonic logical model of this process which defines unique minimal preferred models and efficiently simulates a kind of closed-world reasoning of particular interest for human cognition. Byrne's "suppression" data (Byrne, 1989) are used to illustrate how variants on this logic can capture and motivate subtly different interpretative stances which different subjects adopt, thus indicating where more fine-grained empirical data are required to understand what subjects are doing in this task. We then show that this logical competence model can be implemented in spreading activation network models. A one-pass process interprets the textual input by constructing a network which then computes minimal preferred models for (3-valued) valuations of the set of propositions of the text. The neural implementation distinguishes easy forward reasoning from more complex backward reasoning in a way that may be useful in explaining directionality in human reasoning.
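As an informal illustration of the kind of closed-world, minimal-model computation the abstract refers to, the Python sketch below iterates a 3-valued (Kleene/Fitting-style) consequence operator to a least fixed point over one possible logic-programming encoding of Byrne's suppression materials. This is a sketch under assumptions, not the paper's own formalism or implementation; the atom names ("essay", "open", "ab", "library") and the abnormality clause are hypothetical choices made here for illustration.

```python
# Minimal sketch (assumed encoding, not the paper's implementation): compute the least
# 3-valued model of a small logic program with negation as failure, by iterating a
# Fitting-style consequence operator from the all-unknown valuation.
# A rule is (head, body); a body is a list of literals ('pos', atom) or ('neg', atom).

T, F, U = 'true', 'false', 'unknown'

def eval_lit(lit, val):
    sign, atom = lit
    v = val[atom]
    if sign == 'pos':
        return v
    return {T: F, F: T, U: U}[v]          # strong-Kleene negation

def eval_body(body, val):
    vs = [eval_lit(l, val) for l in body]  # empty body evaluates to true (a fact)
    if all(v == T for v in vs):
        return T
    if any(v == F for v in vs):
        return F
    return U

def least_model(atoms, rules):
    """Iterate the consequence operator until the 3-valued valuation stabilizes."""
    val = {a: U for a in atoms}
    while True:
        new = {}
        for a in atoms:
            bodies = [b for (head, b) in rules if head == a]
            if not bodies:                                    # no rule: closed world, false
                new[a] = F
            elif any(eval_body(b, val) == T for b in bodies):
                new[a] = T
            elif all(eval_body(b, val) == F for b in bodies):
                new[a] = F
            else:
                new[a] = U
        if new == val:
            return val
        val = new

# Illustrative encoding of the two-conditional suppression case (hypothetical atoms):
atoms = ['essay', 'open', 'ab', 'library']
rules = [
    ('essay', []),                                   # fact: she has an essay to write
    ('ab', [('neg', 'open')]),                       # the conditional is abnormal if the library is not open
    ('library', [('pos', 'essay'), ('neg', 'ab')]),  # essay, and nothing abnormal, yields library
]
print(least_model(atoms, rules))
```

Under this encoding, nothing is asserted about 'open', so closed-world reasoning makes it false, 'ab' comes out true, and 'library' is not concluded in the minimal model; this is one way such a logic can model the observed suppression of the modus ponens conclusion.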