1 Introduction

We place the work in this paper against the background of investigations in AI, Game Theory and Logic on bounded rationality and the problem of logical omniscience (Fagin et al. 1995). Models of agents with unlimited inferential powers work well for certain types of distributed systems but are not sufficient to model real human reasoning and its limitations. A number of empirical studies on human reasoning reveal that subjects are systematically fallible in reasoning tasks (Stanovich and West 2000; Stenning and van Lambalgen 2008). These studies provide evidence that humans hold very nuanced propositional attitudes and that performing deductive reasoning steps is only possible within a limited time-frame and at the cost of real cognitive effort. In this context a case can be made for logically competent but not infallible agents who adhere to a standard of Minimal Rationality (Cherniak 1986). Such an agent can make some, but not necessarily all, of the apparently appropriate inferences. In specifying what makes inferences (in)feasible, empirical facts pertaining to the availability of cognitive resources are crucial; for example, it is natural to take into account limitations of time and memory when setting the standard of what the agent should achieve. As we approach this topic from the context of logic, we design a normative model, rather than a purely descriptive one.

As an illustration we consider the standard Muddy Children Puzzle (Fagin et al. 1995) which is based on the unrealistic assumption that children are unbounded reasoners and perfect logicians, who can perform demanding deductive steps all at once.

Suppose that n children are playing together and k of them get mud on their foreheads. Each child can see the mud on the others but not on her own forehead. First their father announces “at least one of you is muddy” and then asks over and over “does any of you know whether you are muddy?” Assuming that the kids are unbounded reasoners, the first \(k-1\) times the father asks, everybody responds “no” but the k-th time all the muddy children answer “yes”.

We support the argument in Parikh (1987) that the limited capacity of humans, let alone children, can well lead to outcomes of the puzzle that disagree with the standard textbook analysis. The mixture of reasoning steps a child has to take needs to be “situated” within specific bounds of time, memory, etc. Our aim in this paper is therefore to design a cognitively informed model of the dynamics of inference. To achieve this, we use tools from Dynamic Epistemic Logic (DEL) (Baltag and Renne 2016; Baltag and Smets 2008; van Benthem 2011; van Ditmarsch et al. 2007). DEL is equipped with dynamic operators, which can be used to denote applications of inference rules. We give a semantics of these operators via plausibility models (Baltag and Smets 2008). Our models are supplemented by (a) impossible worlds (not closed under logical consequence), suitably structured according to the effect of inference rules, and (b) quantitative components capturing the agent’s cognitive capacity and the costs of rules with respect to certain resources (e.g. memory, time). Note that our work, while building further on the early impossible-worlds approaches to logical omniscience (Hintikka 1975), tries to overcome the main criticism levelled at them: that they ignore the agents’ logical competence and lack explanatory power as to what really comes into play whenever we reason. In our work, deductive reasoning is reflected in the dynamic truth clauses. These include resource-sensitive ‘preconditions’ and utilize a model update mechanism that modifies the set of worlds and their plausibility, but also reduces cognitive capacity by the appropriate cost. We therefore show that an epistemic state is not expanded effortlessly, but, instead, via applications of rules, to the extent that they are cognitively affordable. We illustrate this formal setting on the above-mentioned muddy children scenario with boundedly rational children, for the case \(k=2\). We further show that our models can be reduced to awareness-like plausibility structures that validate the same formulas, and a sound and complete axiomatization is given with respect to them. This paper builds further on the main ideas first presented at WoLLIC 2018 (Smets and Solaki 2018). More specifically, it expands on Smets and Solaki (2018) by focusing also on a mixture of different types of reasoning tasks. Such tasks combine bounded reasoning with the revision of epistemic and doxastic states that occurs when the agent hears or observes new external information.

An arbitrary syntactic awareness-filter, used to discern explicit attitudes as in Fagin and Halpern (1987), cannot work for our purposes because it cannot be associated with logical competence. Even if ad-hoc modifications are imposed on standard awareness models, e.g. by closing awareness under subformulas, some forms of the problem are retained. A notable exception, where awareness is affected by reasoning, is given in Velázquez-Quesada (2011); we pursue a similar rule-based approach in this paper. In relation to other work on tracking a fallible agent’s reasoning and cognitive effort, we refer to Alechina and Logan (2009), Bjerring and Skipper (2018), and Rasmussen (2015). The first of these papers accounts for reasoning processes through, among others, inference-based state-transitions, but their composition is not specified. The second includes operators for the agent’s applications of inference rules, accompanied by cognitive costs, but no semantic interpretation is given. The third uses operators standing for a number of reasoning steps, and an impossible-worlds semantics, but it is not clear how the number of steps can be determined, nor what makes reasoning halt after that. In contrast, our work aims at an elaborate unfolding of reasoning processes, which is necessary in order to provide more cognitively plausible explanations of why such processes eventually halt. In doing so, we combine the benefits of plausibility models and impossible worlds in the realistic modelling of competent but bounded reasoners. We then suggest how the technical treatment of the resulting framework can be facilitated, and embed the effect of external information in it.

The paper is structured as follows: in Sect. 2 we introduce our framework and discuss its contribution to the highlighted topics. The reduction laws (i.e. rewrite-rules) and the axiomatization are given in Sect. 3. In Sect. 4 we explain how the framework is combined with the dynamics of interaction and we finally present our conclusions and directions for further work in Sect. 5.

2 The Logical Framework to Model the Effort of Inference Steps

Our framework has two technical aims: (a) invalidating the closure properties of logical omniscience, and (b) elucidating the details of agents engaging in a step-wise, orderly, effortful reasoning process.

2.1 Syntax

Let \(\mathcal {L}_p\) denote a standard propositional language based on a set of atoms \(\varPhi \). Using this notation we first define inference rules:

Definition 1

(Inference rule) Given \(\phi _1, \ldots , \phi _n, \psi \in \mathcal {L}_p\), an inference rule \(\rho \) is an expression of the form \(\{\phi _1, \ldots , \phi _n\} \leadsto \psi \), read as “whenever every formula in \(\{\phi _1, \ldots , \phi _n \}\) is true, \(\psi \) is also true”.

We use \({\textit{pr}}(\rho )\) and \({\textit{con}}(\rho )\) to abbreviate, respectively, the set of premises and the conclusion of a rule \(\rho \) and \(\mathcal {L}_R\) to denote the set of all inference rules. To identify the truth-preserving rules, we define:

Definition 2

(Translation) The translation of a rule \(\rho \) is given by the following implication in \(\mathcal {L}_p\), i.e. \({\textit{tr}}(\rho ) := \bigwedge _{\phi \in {\textit{pr}}(\rho )} \phi \rightarrow {\textit{con}}(\rho )\)
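
As a simple instance of Definition 2, consider a (hypothetical) modus ponens rule over atoms p and q; its translation is the corresponding material implication:

$$\begin{aligned} {\textit{tr}}\big (\{p, p \rightarrow q\} \leadsto q\big ) = \big (p \wedge (p \rightarrow q)\big ) \rightarrow q \end{aligned}$$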

We introduce the language \(\mathcal {L}\), extending \(\mathcal {L}_p\) with two epistemic modalities: K for conventional knowledge, and \(\Box \) for defeasible knowledge. As argued in Baltag and Smets (2008), it is philosophically interesting to include both attitudes in one system. While K represents an agent’s full introspective and factive attitude, \(\Box \) is factive but not fully introspective. This weaker notion satisfies the S4-properties and is inspired by the defeasibility analysis of knowledge (Lehrer 2000; Stalnaker 2006), while K satisfies the S5-properties and is considered to be infallible and indefeasible. Regarding the changes of the agent’s epistemic state, induced by deductive reasoning, we introduce dynamic operators labelled by inference rules, of the form \(\langle \rho \rangle \).

Definition 3

(Language \(\mathcal {L}\)) The set of terms T is defined as \(T := \{ c_\rho \mid \rho \in \mathcal {L}_R \} \cup \{{cp} \}\) with elements for all the cognitive costs \(c_\rho \) of inference rules \(\rho \in \mathcal {L}_R\), and the cognitive capacity cp. Given a set of propositional atoms \(\varPhi \), the language \(\mathcal {L}\) is defined by:

$$\begin{aligned} \phi \,{:}{:}{=}\,p | z_1 s_1 + \cdots + z_n s_n \ge \mathrm {c} | \lnot \phi | \phi \wedge \phi | K \phi | \Box \phi | A \rho | \langle \rho \rangle \phi \end{aligned}$$

where \(p \in \varPhi \), \(z_1, \ldots , z_n \in \mathbb {Z}\), \(\mathrm {c} \in \mathbb {Z}^r\), \(s_1, \ldots , s_n \in T\), and \(\rho \in \mathcal {L}_R\).

The language comprises linear inequalities of the form \(z_1 s_1 + \ldots + z_n s_n \ge \mathrm {c}\), to deal with cognitive effort via comparisons of costs and capacity.Footnote 1 The modalities K and \(\Box \) represent infallible and defeasible knowledge, respectively.Footnote 2 The operator A indicates the agent’s availability of inference rules, i.e. \(A \rho \) denotes that the agent has acknowledged rule \(\rho \) as truth-preserving (and is capable of applying it). The dynamic operators of the form \(\langle \rho \rangle \) are such that \(\langle \rho \rangle \phi \) reads “after applying the inference rule \(\rho \), \(\phi \) is true”. In \(\mathcal {L}\), formulas involving \(\le \), \(=\), −, \(\vee \), \(\rightarrow \) can be defined as usual. Moreover, a formula of the form \(s_1 \ge s_2\) abbreviates \(s_1 - s_2 \ge \overline{0}\).

2.2 Plausibility Models

Our semantics is based on plausibility models (Baltag and Smets 2008). In line with Spohn (1988) we use a mapping from a given set of worlds to the class of ordinals \(\varOmega \) to derive the plausibility ordering. The model is augmented by impossible worlds, which need not be closed under logical consequence. However, while the agent’s fallibility is not precluded—it is in fact witnessed by the inclusion of impossible worlds—it is reasoning, i.e. applications of rules, that gradually eliminates the agent’s ignorance. As a starting point, we adopt a Minimal Consistency requirement, ruling out ‘explicit contradictions’ that are obvious cases of inconsistency for any (minimally) rational agent.

In order to capture the increasing cognitive load of deductive reasoning in line with empirical findings, we first introduce two parameters: (i) the agent’s cognitive resources, and (ii) the cognitive cost of applying inference rules. Regarding (i), we will use \( Res \) to denote the set of resources, which can contain memory, time, attention, etc., and let \(r := |{\textit{Res}}|\) be the number of resources considered in the modelling. Regarding (ii), the cognitive effort of the agent with respect to each resource is captured by a function \(c: \mathcal {L}_R \rightarrow \mathbb {N}^r\) that assigns a cognitive cost to each inference rule. As experimental results show, not all inference rules require equal cognitive effort: Johnson-Laird et al. (1992), Rips (1994), and Stenning and van Lambalgen (2008) claim that the asymmetry in performance observed when a subject uses Modus Ponens and Modus Tollens is suggestive of an increased difficulty in applying the latter.Footnote 3

Every model comes equipped with the parameters \({\textit{Res}}\) and c. We also introduce a cognitive capacity component to capture the agent’s available power with respect to each resource. As resources are depleted while reasoning evolves, capacity is not constant, but it changes after each reasoning step. The choice for an agent-specific capacity that is affected by reasoning steps is in accord with connections between capacity and performance in deductive reasoning (Bara et al. 1995).

Concrete assignments of the different cognitive costs and capacity rely on empirical research. We hereby adopt a simple numerical approach to the values of resources because this seems convenient in terms of capturing the availability and cost of time and it is also aligned with research on memory (Cowan 2001; Miller 1956).Footnote 4

Definition 4

(Plausibility model) A plausibility model is a tuple \(M= \langle W^P,\) \(W^I,{\textit{ord}}, V, R, cp \rangle \) consisting of \(W^P, W^I\), non-empty sets of possible and impossible worlds respectively. \({\textit{ord}}\) is a function from \(W:= (W^P \cup W^I)\) to the class of ordinals \(\varOmega \) assigning an ordinal to each world. \(V: W \rightarrow \mathcal {P}(\mathcal {L})\) is a valuation function mapping each world to a set of formulas. \(R: W \rightarrow \mathcal {P} (\mathcal {L}_R)\) is a function indicating the rules the agent has available (i.e. has acknowledged as truth-preserving) at each world. Cognitive capacity is denoted by \( cp \), i.e. \( cp \in \mathbb {Z}^r\), indicating what the agent is able to afford with regard to each resource.

Regarding possible worlds, the valuation function assigns the set of atoms that are true at the world. Regarding impossible worlds, the function assigns all formulas, atomic or complex, true at the world.Footnote 5 The function \({\textit{ord}}\) induces a plausibility ordering, i.e. a binary relation on W: for \(w, u \in W\): \( w \ge u\) iff \({\textit{ord}}(w) \ge {\textit{ord}}(u)\); its intended reading is “w is no more plausible than u”. Hence, the smaller the ordinal, the more plausible the world. The induced relation \(\ge \) is reflexive, transitive, connected and conversely well-founded.Footnote 6 We define \(\sim \), representing epistemic indistinguishability: \(w \sim u \text { iff } w \ge u \text { or } u \ge w\), i.e. \(\ge \)-comparable states are epistemically indistinguishable for the agent (van Ditmarsch et al. 2015, Chapter 7).

To ensure that the rules available to the agent are truth-preserving, and assuming that propositional formulas are evaluated as usual in possible worlds, we impose Soundness of Rules: for every \(w \in W^P\), if \(\rho \in R(w)\) then \(M, w \models {\textit{tr}}(\rho )\). We also need a condition to hardwire the effect of deductive reasoning in the model. To that end, we take:

Definition 5

(Propositional truths) Let M be a model and \(w \in W\) a world of the model. If \(w \in W^P\), its set of propositional truths is \(V^* (w)= \{ \phi \in \mathcal {L}_p \mid M, w \models \phi \}\). If \(w \in W^I\), \(V^*(w) = \{ \phi \in \mathcal {L}_p \mid \phi \in V(w) \}\).

Based on \(V^*\), which is determined by V, we impose Succession on the model: for every \(w \in W\), if (i) \({\textit{pr}}(\rho ) \subseteq V^*(w)\), (ii) \(\lnot {\textit{con}}(\rho ) \not \in V^*(w)\), (iii) \({\textit{con}}(\rho ) \ne \lnot \phi \) for all \(\phi \in V^*(w)\) then there is some \(u \in W\) such that \(V^*(u)=V^*(w) \cup \{ {\textit{con}}(\rho ) \}\).

Definition 6

(\(\rho \) -radius) The \(\rho \)-radius of a world w is given by:Footnote 7

$$\begin{aligned} w^{\rho } := {\left\{ \begin{array}{ll} \{w\}, &{} \text {if } {\textit{pr}}(\rho ) \not \subseteq V^*(w) \\ \emptyset , &{} \text {if } {\textit{pr}}(\rho ) \subseteq V^*(w) \text { and } \big (\lnot {\textit{con}}(\rho ) \in V^*(w) \text { or } {\textit{con}}(\rho )= \lnot \phi \text { for some } \phi \in V^*(w)\big ) \\ \{u \mid u \text { is the successor of } w\}, &{} \text {if } {\textit{pr}}(\rho ) \subseteq V^*(w) \text { and } \lnot {\textit{con}}(\rho ) \not \in V^*(w) \text { and } {\textit{con}}(\rho ) \ne \lnot \phi \text { for all } \phi \in V^*(w) \end{array}\right. } \end{aligned}$$

The \(\rho \)-radius, inspired by Bjerring and Skipper (2018), represents how the rule \(\rho \) triggers an informational change and its element, if it exists, is called the \(\rho \)-expansion. A rule whose premises are not all true at a world does not trigger any change, which is why the only expansion is the world itself. A rule that leads to an explicit contradiction yields the empty radius, as is arguably appropriate for minimally rational agents. If the conditions of Succession are met, the radius contains the new “enriched” world. Due to the injectiveness of the valuation on possible and on impossible worlds (distinct worlds are assigned distinct sets of truths), a world’s \(\rho \)-expansion is unique. As \(\rho \)-expansions expand the state from which they originate, inferences are not defeated as reasoning steps are taken; Succession thus guarantees monotonicity, to the extent that Minimal Consistency is respected. Note that w’s \(\rho \)-expansion amounts to itself for \(w \in W^P\) (due to the deductive closure of possible worlds), while an impossible world’s \(\rho \)-expansion is another impossible world.
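
To make the case distinction of Definition 6 concrete, the following sketch computes a \(\rho \)-radius over a toy encoding in which worlds are identified with their sets of propositional truths \(V^*(w)\) and negation is marked syntactically. The encoding, the function names and the sample formulas are only illustrative assumptions, not part of the formal setup.

```python
# A minimal sketch of the rho-radius (Definition 6). Worlds are identified with
# their sets of propositional truths V*(w); formulas are plain strings and
# negation is the syntactic prefix "~". This encoding is illustrative only.

def neg(phi):
    """Syntactic negation; a doubly negated formula is simplified."""
    return phi[1:] if phi.startswith("~") else "~" + phi

def rho_radius(truths, premises, conclusion):
    """Return the rho-radius of a world, given as its set of truths V*(w)."""
    truths = frozenset(truths)
    if not set(premises) <= truths:
        return {truths}            # case 1: the rule is not triggered
    if neg(conclusion) in truths or any(conclusion == neg(phi) for phi in truths):
        return set()               # case 2: Minimal Consistency would be violated
    return {truths | {conclusion}} # case 3: the unique successor given by Succession

# Hypothetical usage, loosely following Example 1: applying MP at a world where
# both premises hold enriches that world with the conclusion m_a.
print(rho_radius({"n_b", "n_b -> m_a"}, ["n_b", "n_b -> m_a"], "m_a"))
```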

2.3 Model Transformations and Semantic Clauses

To evaluate \(\langle \rho \rangle \phi \), we have to examine the truth value of \(\phi \) in a transformed model, defined in such a way as to capture the effect of applying \(\rho \). Roughly, a pointed plausibility model \((M', w)\) (which consists of a plausibility model and a point indicating the real world) is the \(\rho \)-update of a given pointed plausibility model whenever its set of worlds is replaced by the worlds reachable from them by an application of \(\rho \), while the ordering is adapted accordingly. That is, if a world u was initially entertained by the agent, but does not “survive” an application of \(\rho \), then it is eliminated. This world must have been an impossible world and a deductive step uncovered its impossibility. Once such worlds are ruled out, the initial ordering is preserved to the extent that it is unaffected by the application of the rule. More concretely, let \(M= \langle W^P, W^I, {\textit{ord}}, V, R, cp \rangle \) be a plausibility model and \((M, w)\) the pointed model based on w. The updated model \(M^\rho \) is given via the following steps (a schematic sketch of them is given after the list):

  1. Step 1

    Given a rule \(\rho \), \(W^{\rho } := \bigcup _{v \in W} v^{\rho } \). In words, \(W^\rho \) consists of the \(\rho \)-expansions of the worlds initially entertained by the agent. So the \(\rho \)-updated pointed model \((M^\rho ,w)\) should be such that its set of worlds is \(W^{\rho }\). As observed above, any elimination of worlds is in fact an elimination affecting the set \(W^I\).

  2. Step 2

    We now develop the new ordering \({\textit{ord}}^{\rho }\) following the application of the inference rule. Take \(u \in W^{\rho }\). This means that there is at least one \(v \in W\) such that \(\{u\}= v^{\rho }\). Denote the set of such v’s by N. Then \({\textit{ord}}^{\rho }(u)={\textit{ord}}(z)\) for any \(\ge \)-minimal (i.e. most plausible) \(z \in N\). Therefore, if a world is in \(W^{\rho }\), then it takes the position of the most plausible of the worlds from which it originated.

  3. Step 3

    V and R are simply restricted to the worlds in \(W^{\rho }\) and \( cp ^{\rho } := cp -c(\rho )\). Again, for \(u,v \in W^{\rho }\), we say: \(u \ge ^{\rho } v\) iff \({\textit{ord}}^{\rho } (u) \ge {\textit{ord}}^{\rho } (v)\). It is easy to check that all the required properties are preserved.
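
The three steps above can be summarized in a short sketch. The \(\rho \)-radius of each world and the ordinal assignment are taken as given, worlds are identified by (hypothetical) names, and the restriction of V and R to the surviving worlds is omitted; this is a simplified illustration of the transformation, not a full implementation of the semantics.

```python
# A sketch of the rho-update (Steps 1-3). The rho-radius of every world and the
# ordinal assignment are assumed as inputs; V and R, which are simply restricted
# to the surviving worlds, are left out for brevity.

def rho_update(worlds, ordinals, radius, cp, cost):
    # Step 1: keep exactly the rho-expansions of the worlds entertained so far.
    new_worlds = set().union(*(radius[v] for v in worlds))
    # Step 2: a surviving world inherits the ordinal of the most plausible world
    # it originates from (smaller ordinal = more plausible).
    new_ordinals = {u: min(ordinals[v] for v in worlds if radius[v] == {u})
                    for u in new_worlds}
    # Step 3: capacity is reduced componentwise by the cost of the rule.
    new_cp = tuple(x - y for x, y in zip(cp, cost))
    return new_worlds, new_ordinals, new_cp

# Hypothetical run loosely inspired by Example 1: the impossible world w0 has
# TR-expansion w2, while the possible world w1 expands to itself.
print(rho_update(worlds={"w0", "w1"},
                 ordinals={"w0": 0, "w1": 1},
                 radius={"w0": {"w2"}, "w1": {"w1"}},
                 cp=(5, 7), cost=(5, 2)))
```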

Prior to defining the truth clauses we need to assign interpretations to the terms in T. Their intended reading is that those of the form \(c_\rho \) correspond to the cognitive costs of inference rules whereas \( cp \) corresponds to the agent’s cognitive capacity. This is why \( cp \) is used both as a model component and as a term of our language; which of the two is meant can be understood from the context.

Definition 7

(Interpretation of terms) Given a model M, the terms of T are interpreted as follows: \( cp ^M = cp \) and \(c^M_{\rho }= c(\rho )\).

Our intended reading of \(\ge \) is that \(s \ge t\) iff every i-th component of s is greater than or equal to the i-th component of t. The semantic clause for a rule-application should reflect that the rule must be “affordable” to be executable; the agent’s cognitive capacity must be able to absorb the resource consumption caused by firing the rule. The semantics is finally given by:

Definition 8

(Plausibility semantic clauses) The following clauses inductively define when a formula \(\phi \) is true at w in M (notation: \(M, w \models \phi \)) and when \(\phi \) is false at w in M (notation: \(M, w \Dashv \phi \)). For \(w \in W^I\): \(M,w \models \phi \) iff \( \phi \in V (w)\), and \(M,w \Dashv \phi \) iff \(\lnot \phi \in V(w)\). For \(w \in W^P\), given that the boolean cases are standard:

\(M,w \models p\) iff \(p \in V(w)\), where \(p \in \varPhi \)

\(M,w \Dashv \phi \) iff \(M,w \not \models \phi \)

\(M,w \models K \phi \) iff \(M,u \models \phi \) for all \(u \in W\)

\(M, w\models A \rho \) iff \(\rho \in R(w)\)

\(M,w \models \Box \phi \) iff \(M,u \models \phi \) for all \(u \in W\) such that \(w \ge u\)

\(M,w \models \langle \rho \rangle \phi \) iff \(M,w\models ( cp \ge c_\rho )\), \(M,w \models A \rho \) and \(M^\rho , w \models \phi \)

\(M, w \models z_1 s_1 + \cdots + z_n s_n \ge \mathrm {c}\) iff \(z_1 s^M_1 + \ldots + z_n s^M_n \ge \mathrm {c}\)

Validity is defined with respect to possible worlds only. The truth clause for knowledge is standard, except that it also quantifies over impossible worlds. The truth of rule-availability is determined by the corresponding model function. It is then evident that the truth conditions for epistemic assertions prefixed by a rule \(\rho \) are sensitive to the idea of resource-boundedness, unlike plain assertions. The latter require that \(\phi \) is the case throughout the quantification set, even at worlds representing inconsistent/incomplete scenarios. The former ask that the rule is affordable, available to the agent, and that \(\phi \) follows from the accessible worlds via \(\rho \). Since the agent also entertains impossible worlds, she has to take a step in order to gradually minimize her ignorance.

2.4 Discussion

These constructions overcome logical omniscience, while still accounting for how we perform inferences that lie within suitable applications of rules. In particular, the presence of impossible worlds suffices to invalidate the closure principles. Moreover, the truth conditions for \(\langle \ddag \rangle \spadesuit \phi \), where \(\langle \ddag \rangle \) abbreviates a sequence of inference rules and \(\spadesuit \) stands for a propositional attitude such as K or \(\Box \), demonstrate that an agent can come to know \(\phi \) by following an affordable and available reasoning track. In fact, the rule-sensitivity, the measure on cognitive capacity and the way it is updated allow us to track to what extent reasoning can evolve in practice. Furthermore, running out of resources depends not only on the number of reasoning steps but also on the kind and order of the rules applied. Our approach takes these factors into account and explains how the agent exhausts her resources while reasoning.

Unlike Bjerring and Skipper (2018) and Duc (1997), we abstain from a generic notion of reasoning process and do not presuppose the existence of an arbitrary cutoff on reasoning. Instead, we account explicitly for (a) the specific rules available to the agent, (b) their individual applications, (c) their chronology, and (d) their cognitive consumption. This elaborate analysis is crucial in bridging epistemic frameworks with empirical facts, because it draws on studies in the psychology of reasoning, which typically examine individual inference rules in terms of cognitive difficulty.Footnote 8 Furthermore, the enterprise of providing a semantics contributes to the attempt of Rasmussen (2015), who tracks reasoning processes but lacks a principled way to defend his selection of axioms. Constructing a semantic model that captures the change triggered by rule-applications allows for a definition of validity, which is important in assessing the adequacy of the solution.

We will illustrate our framework on the Muddy Children Puzzle, highlighted in the introduction. We analyze the failure of applying a sequence of rules in the \(k=2\) scenario, attributed to the fact that the first rule applied is so cognitively costly for a child that her available time expires before she can apply the next. It thus becomes clear why in even more complex cases (e.g. for \(k > 2\)) human agents are likely to fail, contrary to predictions of standard logics, whereby demanding reasoning steps are performed at once and without effort. Our framework models the dynamics of inference and the resource consumption each step induces.

Example 1

(Bounded muddy children) Take \(m_a\), \(m_b\) as the atoms for “child a (resp. b) is muddy” and \(n_a, n_b\) for “child a (resp. b) answers no to the first question”. Let \(M= \langle W^P, W^I, {\textit{ord}}, V, R, cp \rangle \) be as depicted in Fig. 1. For simplicity, take two rules, transposition of the implication and modus ponens, so that \(R=\{ {\textit{TR}}, {\textit{MP}}\}\) where \({\textit{TR}} = \{ \lnot m_a \rightarrow \lnot n_b \} \leadsto n_b \rightarrow m_a\), \({\textit{MP}} = \{ n_b, n_b \rightarrow m_a \} \leadsto m_a\), \({\textit{Res}}=\{ time, memory \}\), \(c({\textit{TR}})=(5,2), c({\textit{MP}})=(2,2)\), \( cp =(5,7)\).

Analyzing the reasoning of child a (see Fig. 1) after the father’s announcement and after child b answered “no” to the first question, we verify that \(\Box (\lnot m_a \rightarrow \lnot n_b)\) and \(\Box n_b\) are valid, i.e. child a initially knows that if she is not muddy, then child b should answer “yes” (as in that case only b is muddy), and that b said “no”. Following a TR-application, the world \(w_0\) is eliminated and its position is taken by its \({\textit{TR}}\)-expansion, i.e. \(w_2\), and \( cp ^{{\textit{TR}}} = (5,7)-(5,2) = (0,5)\). In addition, \(A \, {\textit{TR}}\) and \( cp \ge c_{{\textit{TR}}}\) hold. Therefore \(\langle {\textit{TR}} \rangle \Box (n_b \rightarrow m_a)\) is also valid. But now the cost of the next step is too high, i.e. \(M^{{\textit{TR}}},w_{1}\) \(\not \models \) \( cp \ge c_{{\textit{MP}}}\) (compare \( cp ^{{\textit{TR}}}\) and \(c({\textit{MP}})\)), so overall the formula \( \langle {\textit{TR}} \rangle \lnot \langle {\textit{MP}} \rangle \Box m_a \) is indeed valid.

Fig. 1
The reasoning of boundedly rational child a. Thicker borders are used for deductively closed possible worlds. In impossible worlds, we write all propositional formulas satisfied and indicate (non-trivial) expansions via dashed arrows.
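
The arithmetic behind Example 1 can be replayed directly. The sketch below only performs the componentwise comparison of capacity and costs required by the clause for \(\langle \rho \rangle \) in Definition 8; it is not a model of the full semantics.

```python
# Componentwise affordability check for Example 1, with Res = {time, memory}.
def affordable(cp, cost):
    return all(c >= k for c, k in zip(cp, cost))

cp = (5, 7)                      # initial capacity
c_TR, c_MP = (5, 2), (2, 2)      # costs of TR and MP

print(affordable(cp, c_TR))      # True: TR can be applied
cp_TR = tuple(c - k for c, k in zip(cp, c_TR))
print(cp_TR)                     # (0, 5): capacity after applying TR
print(affordable(cp_TR, c_MP))   # False: time has run out, so MP cannot fire
```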

3 Reduction and Axiomatization

Work in Wansing (1990) shows how various models for knowledge and belief, including structures for awareness (Fagin and Halpern 1987), can be viewed as impossible-worlds models (more specifically, Rantala models, Rantala 1982) that validate precisely the same formulas (given a fixed background language). In the remainder, we explore the other direction and show that our impossible-worlds framework can be reduced to an awareness-like one that only involves possible worlds. In the absence of impossible worlds, standard techniques used in axiomatizing DEL settings (via reduction axioms) can be applied. This reduction is a technical contribution; the components of the reduced model lack the intuitive readings of the original framework, but they allow us to prove completeness. Further, this method combines the benefits of impossible worlds in modelling non-ideal agents with the technical treatment facilitated by awareness-like DEL structures.

First, we show how the static part of the impossible-worlds setting can be transformed into one that merely involves possible worlds and captures the effect of impossible worlds via the introduction of auxiliary modalities and syntactic, awareness-like functions. Second, we obtain a sound and complete axiomatization for the static part through the construction of a suitable canonical model. Third, we give DEL-style reduction axioms that reduce formulas involving the dynamic rule-operators to formulas that contain no such operator. In this way, we use the completeness of the static part to get a complete axiomatization for the dynamic setting.

3.1 The (Static) Language for the Reduction

We first fix an appropriate language \(\mathcal {L}_{ r }\) as the “common ground” needed to show that the reduction is successful, i.e. that the same formulas are valid under the original and the reduced models. As before, let \(\spadesuit \) stand for K or \(\Box \) and take the quantification set \(Q_\spadesuit (w)\) to be W if \(\spadesuit =K\), and \(Q_\spadesuit (w) := \{u \mid w \ge u \}\), if \(\spadesuit =\Box \) (to denote the set that the truth clauses for K and \(\Box \) quantify over). Auxiliary operators are then introduced to the static fragment of \(\mathcal {L}\), in order to capture (syntactically) the effect of impossible worlds in the interpretations of propositional attitudes. For \(w \in W^P\):

  • \(M,w \models L_{\spadesuit } \phi \text { iff } M, u \models \phi \text { for all } u \in W^P \cap Q_{\spadesuit } (w)\)

  • \(M,w \models I_{\spadesuit } \phi \text { iff } M, u \models \phi \text { for all } u \in W^I \cap Q_{\spadesuit } (w)\)

That is, \(L_\spadesuit \) provides the standard quantification over the possible worlds while \(I_\spadesuit \) isolates the impossible worlds, for each \(\spadesuit =K, \Box \). In addition, we introduce operators to encode the model’s structure:

  • \(M,w \models \hat{I}_\spadesuit \phi \text { iff } M, u \models \phi \text { for some } u \in W^I \cap Q_\spadesuit (w)\)

  • \(M,w \models \langle {\textit{RAD}} \rangle _{\rho } \phi \) iff for some \(u \in w^{\rho }\): \(M, u \models \phi \)

The operators \(\langle {\textit{RAD}} \rangle _{\rho }\), labelled by inference rules, are such that \(\langle {\textit{RAD}} \rangle _{\rho } \phi \) holds whenever there is some \(\phi \)-satisfying \(\rho \)-expansion. To express that all \(\rho \)-expansions are \(\phi \)-satisfying, we use \([{\textit{RAD}}]_\rho \phi := \langle {\textit{RAD}} \rangle _\rho \top \rightarrow \langle {\textit{RAD}} \rangle _\rho \phi \), because once an expansion exists, it is unique. Indexed operators of this form provide information on the model’s structure; they are introduced syntactically only as temporal-style projections of the connections induced by inference rules on the model. This is why their interpretation should be independent of the distinction between possible and impossible worlds. For example, for \(w \in W^I\): \(M,w \models \langle {\textit{RAD}} \rangle _{\rho } \phi \) iff for some \(u \in w^{\rho }\): \(M, u \models \phi \). We also use the following abbreviation: if \(\phi \) is of the form \(\lnot \psi \), for some formula \(\psi \), then \(\overline{I}_\spadesuit \phi := \hat{I}_\spadesuit \psi \), else \(\overline{I}_\spadesuit \phi := \bot \).

3.2 Building the Reduced Model

Towards interpreting the auxiliary operators in the reduced model, we construct awareness-like functions. Take \(V^+ (w) := \{ \phi \in \mathcal {L}_r \mid M, w \models \phi \}\) for \(w \in W^I\) and:

  • \(\mathrm {I}_\spadesuit : W^P \rightarrow \mathcal {P} (\mathcal {L}_{ r })\) such that \(\mathrm {I}_\spadesuit (w)= \bigcap _{v \in W^I \cap Q_\spadesuit (w)} V^+(v)\). Intuitively, \(\mathrm {I}_\spadesuit \) takes a possible world w and yields the set of those formulas that are true at all impossible worlds in its quantification set.

  • \({\hat{\mathrm {I}}}_{\spadesuit }: W^P \rightarrow \mathcal {P} (\mathcal {L}_{ r })\) such that \({\hat{\mathrm {I}}}_{\spadesuit }(w)= \bigcup _{v \in W^I \cap Q_\spadesuit (w)} V^+(v)\). Intuitively, \({\hat{\mathrm {I}}}_{\spadesuit }\) takes a possible world w and yields the set of those formulas that are true at some impossible world in its quantification set.

The function \({\textit{ord}}\) captures plausibility and the “world-swapping” effect of rule-applications. Since the latter will be treated via reduction axioms, we provide a reduced model equipped with a standard binary plausibility relation (to serve as an awareness-like plausibility structure (ALPS), with respect to which the static logic will be developed). Given the original model \(M = \langle W^P, W^I, {\textit{ord}},\) \(V, R, cp \rangle \), our reduced model is the tuple \({\mathbf {M}}= \langle \mathrm {W}, \ge , \sim , \mathrm {V}, \mathrm {R}, cp , \mathrm {I}_\spadesuit , {\hat{\mathrm {I}}}_\spadesuit \rangle \) where:

$$\begin{aligned} \begin{array}{ll} \mathrm {W} = W^P &\quad \mathrm {V}(w) = V(w)\ \text {for } w \in \mathrm {W}\\ u \ge w\ \text {iff}\ {\textit{ord}}(u) \ge {\textit{ord}}(w),\ \text {for } w,u \in \mathrm {W} &\quad \mathrm {R}(w)= R(w)\ \text {for } w \in \mathrm {W}\\ u \sim w\ \text {iff}\ u \ge w\ \text {or}\ w \ge u,\ \text {for } w,u \in \mathrm {W} &\quad \mathrm {I}_\spadesuit , {\hat{\mathrm {I}}}_\spadesuit \ \text {as explained above} \end{array} \end{aligned}$$

The clauses based on the reduced model are such that the auxiliary operators are interpreted via the awareness-like functions. They are presented in detail in the Appendix, along with the proof that the reduction is indeed successful:

Theorem 1

(Reduction) Given a model M, let \(\mathbf {M}\) be its (candidate) reduced model. Then \(\mathbf {M}\) is indeed a reduction of M, i.e. for any \(w \in W^P\) and formula \(\phi \in \mathcal {L}_{ r }\): \(M, w \models \phi \) iff \(\mathbf {M}, w \models \phi \).

3.3 Axiomatization

We have reduced plausibility models to ALPSs. We now develop the (static) logic \(\varLambda \), showing that it is sound and complete with respect to them.

Definition 9

(Axiomatization of \(\varLambda \)) \(\varLambda \) is axiomatized by Table 1 and the rules Modus Ponens, \( Necessitation _K\) (from \(\phi \), infer \(L_K \phi \)) and \( Necessitation _\Box \) (from \(\phi \), infer \(L_\Box \phi \)).

Table 1 The static axioms

\( Ineq \), described in Fagin and Halpern (1994), is introduced to accommodate the linear inequalities.Footnote 9 The \(\mathrm {S5}\) axioms for \(L_K\) and \(\mathrm {S4}\) axioms for \(L_\Box \) mimic the behaviour of K and \(\Box \) in the usual plausibility models: these operators quantify over possible worlds only. The (clusters of) axioms about Soundness of Rules, Minimal Consistency and Succession take care of the respective model conditions (to the extent that these affect our language, given its expressiveness). The same holds for Indefeasibility and Local Connectedness, which also mimic their usual plausibility counterparts. To capture the behaviour of radius, we also introduce the Radius axioms. Finally, \( Red _\spadesuit \) expresses K and \(\Box \) in terms of the corresponding auxiliary operators. We now move to the following theorems; the proofs can be found in the Appendix.

Theorem 2

(Soundness) \(\varLambda \) is sound with respect to ALPSs.

Theorem 3

(Completeness) \(\varLambda \) is complete with respect to non-standardFootnote 10 ALPSs.

Given the static logic, it suffices to reduce formulas involving \(\langle \rho \rangle \) in order to get a dynamic axiomatization. It is useful to abbreviate updated terms in our language as follows: \( cp ^\rho := cp - c_\rho \) and \(c_\rho ^\rho := c_\rho \).

Theorem 4

(Reducing \(\langle \rho \rangle \)) The following are valid in the class of our models:

$$\begin{aligned} \begin{array}{c} \langle \rho \rangle (z_1 s_1 + \cdots + z_n s_n \ge \mathrm {c}) \leftrightarrow ( cp \ge c_\rho ) \wedge A \rho \wedge (z_1 s^\rho _1 + \cdots + z_n s^\rho _n \ge \mathrm {c})\\ \langle \rho \rangle \langle {\textit{RAD}} \rangle _{\rho } \phi \leftrightarrow ( cp \ge c_\rho ) \wedge A \rho \wedge \langle \rho \rangle \phi \\ \end{array} \end{aligned}$$
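
As a simple worked instance of the first reduction axiom, take the inequality \( cp \ge c_\rho \) itself (i.e. \( cp - c_\rho \ge \overline{0}\)) as the formula to be reduced; using the abbreviations \( cp ^\rho \) and \(c_\rho ^\rho \) above, the axiom instantiates to:

$$\begin{aligned} \langle \rho \rangle ( cp \ge c_\rho ) \leftrightarrow ( cp \ge c_\rho ) \wedge A \rho \wedge ( cp - c_\rho \ge c_\rho ) \end{aligned}$$

That is, after an affordable and available application of \(\rho \), the rule remains affordable just in case the remaining capacity still covers its cost.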

Theorem 5

(Dynamic axiomatization) The axiomatic system given by Definition 9 and the reduction axioms of Theorem 4 is sound and complete with respect to non-standard ALPSs.

4 Bounded Inference and the Dynamics of Interaction

In the previous sections, we have focused on how deductive reasoning, and the bounds imposed on it by cognitive fatigue, affect the agent’s epistemic state. As observed in van Benthem (2008b), apart from “internal elucidation”, external actions such as public announcements (Baltag et al. 1998; Plaza 2007) can also enhance the agent’s epistemic state. The mixed tasks involved in bounded reasoning and in revising epistemic and doxastic states (also discussed in Wassermann 1999) require an account of both sorts of actions and of the ways they are intertwined. The various policies of dynamic change triggered by interaction (public announcement, radical or conservative upgrades, etc., Baltag and Smets 2008; van Benthem 2007) fit in our framework, provided that suitable dynamic operators and model transformations are defined.

4.1 Public Announcements

To supplement the account of a boundedly rational agent who reasons deductively in order to come to know more, we first introduce public announcements. These public communication actions can facilitate the agent’s knowledge gain, in this case not because of her own reasoning, but because information was provided to her. For now we assume that the incoming external information was provided to the agent for free (i.e. no cognitive costs are assigned). One common assumption is that such announcements are always truthful and completely trustworthy; the announced sentence is therefore always true and a rational agent always adopts its content.

Extending the syntax We introduce operators of the form \([\psi !]\) to \(\mathcal {L}\), where \([\psi !] \phi \) stands for “after announcing \(\psi \), \(\phi \) is true”. We focus on cases where \(\psi \) is a propositional formula.Footnote 11 Let the language extended with public announcements be called \(\mathcal {L}_{ PA }\).

Extending the semantics \(\mathcal {L}_{ PA }\) is interpreted in the plausibility models of Definition 4. The new clause concerns public announcements. Semantically, the formula \([\psi !] \phi \) is taken to be true in case: whenever \(\psi \) is true, \(\phi \) is true after we eliminate all non-\(\psi \) possibilities from W. This is because the public announcement of \(\psi \) is completely trustworthy, so non-\(\psi \) worlds, possible or impossible, are not entertained by the agent any more.Footnote 12 The other components of the model, namely \({\textit{ord}}, V, R\), are restricted accordingly, while \( cp \) does not change as we view the announcement as provided to the agent externally without any effort on her side. More formally, the update induced by a public announcement is:

Definition 10

(Model transformation by public announcement) Given plausibility model \(M=\langle W^P, W^I, {\textit{ord}}, V, R, cp \rangle \), its transformation by the public announcement of \(\psi \in \mathcal {L}_p\) is the model \(M^{\psi !}\) given by:

$$\begin{aligned} \begin{array}{l|l} (W^P)^{\psi !} = \{ w \in W^P \mid M, w \models \psi \} &\quad (W^I)^{\psi !} = \{ w \in W^I \mid M, w \models \psi \} \\ {\textit{ord}}^{\psi !} = {\textit{ord}}|_{W^{\psi !}} &\quad V^{\psi !}= V|_{W^{\psi !}} \\ R^{\psi !}= R|_{W^{\psi !}} &\quad cp ^{\psi !}= cp \end{array} \end{aligned}$$

The defining conditions of our plausibility models are preserved by this definition. The properties of the ordering induced by \({\textit{ord}}\) are guaranteed, just as public announcement updates preserve the conversely well-founded relation \(\ge \) of the usual plausibility models. Minimal Consistency and Soundness of Rules still hold, because the worlds surviving the announcement still adhere to these restrictions. Succession is preserved because of the way \(W^{\psi !}\) and \(V^{\psi !}\) are defined; if the conditions of Succession are met in the updated model, the successor world satisfies \(\psi \) and is therefore included in \(W^{\psi !}\).

We give the truth clause for public announcements, which follows the standard DEL fashion, only now adapted to the impossible-worlds model we devised to deal with deductive reasoning. In particular, for \(w \in W\):

$$\begin{aligned} M, w \models [\psi !] \phi \;\text {iff}\;M, w \models \psi \;\text {implies}\;M^{\psi !}, w \models \phi \end{aligned}$$

Notice that the formula \([\psi !] \phi \) is vacuously true if \(\psi \) is not true. The same clause applies to both possible and impossible worlds. This is because of the intuitive interpretation of public announcements. The only worlds surviving the public announcement of \(\psi \) are the ones satisfying \(\psi \), possible or not, because arguably any non-\(\psi \) world will be dropped as a possibility.

Under this extended setting, we can bring together external information and the agent’s internal reasoning processes. For instance, suppose that the agent knows \(\phi \rightarrow \psi \) and has \({\textit{MP}}\) available as a rule, and then she comes to know that \(\phi \) from an external source. She may then apply the rule (if affordable) and finally come to know \(\psi \). It is therefore the combination of interaction and internal deductive reasoning that allowed her to know \(\psi \). To illustrate the workings of such combinations, we come back to our bounded version of the Muddy Children Puzzle and explicitly account for the interaction taking place.
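
The following sketch illustrates this interplay under the same simplifying assumptions as before (worlds identified with sets of formulas, formulas as strings, hypothetical names throughout): a cost-free announcement eliminates worlds, and a subsequent MP step fires only if it is affordable.

```python
# A sketch combining a cost-free public announcement with one cost-sensitive MP
# application. Worlds are identified with sets of formulas; the encoding is
# illustrative and not the paper's official semantics.

def announce(worlds, psi):
    """Hard update: drop every world (possible or impossible) not containing psi."""
    return {w for w in worlds if psi in w}

def apply_mp(worlds, cp, cost, antecedent, consequent):
    """Apply MP wherever it is triggered, provided the rule is affordable."""
    if not all(c >= k for c, k in zip(cp, cost)):
        return worlds, cp          # not affordable: nothing changes
    expanded = {frozenset(w | {consequent})
                if antecedent in w and f"{antecedent} -> {consequent}" in w else w
                for w in worlds}
    return expanded, tuple(c - k for c, k in zip(cp, cost))

# Hypothetical scenario: the agent entertains a world where phi fails; hearing
# phi announced eliminates that world, and MP (cost (2, 2), capacity (3, 3))
# then makes psi true at every remaining world.
worlds = {frozenset({"phi", "phi -> psi"}), frozenset({"~phi", "phi -> psi"})}
worlds = announce(worlds, "phi")
worlds, cp = apply_mp(worlds, (3, 3), (2, 2), "phi", "psi")
print(worlds, cp)
```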

Example 2

(Bounded muddy children and public announcements) In this example, we incorporate child b’s public announcement of “no” to the father’s question into child a’s reasoning process. In particular, before b’s announcement, child a cannot tell whether she is muddy, nor can she figure it out using deductive reasoning alone, because her reasoning process depends on b’s announcement. We further suppose that the child initially considers it more plausible that she is clean. The development of the scenario is presented in Figs. 2, 3, and 4.

Fig. 2
The initial model for child a, before the announcement of child b. As before, thicker borders are used for deductively closed possible worlds. In impossible worlds, we write all propositional formulas satisfied and indicate (non-trivial) expansions via dashed arrows. Notice that the plausibility model continues from \(w_3\) to \(w_2\).

Fig. 3
The model for child a after child b’s announcement.

Fig. 4
The final model for child a after she performs the \({\textit{TR}}\) inference (as in Example 1), using the information provided by child b.

Effortful announcements We have so far assumed that public announcements are cost-free. However, it can be that adopting a piece of external information requires effort. In van Benthem (2008a) and van Benthem (2008b), it is proposed that there are two different kinds of such informational events, presented as “implicit” and “explicit” observations. In our terms, there can be effortless announcements (like the ones defined before) and effortful announcements.Footnote 13 The latter are just like those in Definition 10, but they also incur a cost for accepting the announced information. This presupposes that costs of announcements are also fixed, alongside the costs of rules. More specifically, the cost-assigning function c should be extended as follows: \(c: \mathcal {L}_R \cup \mathcal {L}_p \rightarrow \mathbb {N}^r\). A simplifying assumption is that announcements of propositional facts always incur the same cost, regardless of the logical structure of the announced sentence. It is nonetheless plausible that the cost of an “explicit” announcement is related to logical complexity, and research on the cognitive difficulty of boolean concepts (Feldman 2000, 2003) might assist in determining these costs. For now we abstain from imposing strict restrictions on c, as the details require empirical evidence and a systematic generalization; this is left for further work. In any case, the definition of an effortful announcement is:

Definition 11

(Model transformation by an effortful public announcement) Take plausibility model \(M=\langle W^P, W^I, {\textit{ord}}, V, R, cp \rangle \). Its transformation by the effortful public announcement of \(\psi \in \mathcal {L}_p\) is the model \(M^{\psi !}\) given by:

$$\begin{aligned} \begin{array}{l|l} (W^P)^{\psi !} = \{ w \in W^P \mid M, w \models \psi \} &\quad (W^I)^{\psi !} = \{ w \in W^I \mid M, w \models \psi \} \\ {\textit{ord}}^{\psi !} = {\textit{ord}}|_{W^{\psi !}} &\quad V^{\psi !}= V|_{W^{\psi !}} \\ R^{\psi !}= R|_{W^{\psi !}} &\quad cp ^{\psi !}= cp - c(\psi ) \end{array} \end{aligned}$$

The truth clause of an effortful announcement much resembles that of rule-applications. Given that we have terms of the form \(c_\psi \) to express the cost of \(\psi \): \(M, w \models [\psi !] \phi \text { iff } M, w \models \psi \text { implies } (M,w \models cp \ge c_\psi \text { and } M^{\psi !}, w \models \phi )\)

Reduction In Sect. 3, we introduced a method to extract a sound and complete axiomatization for our basic framework. This also involved giving reduction axioms for applications of rules. The axiomatization of Public Announcement Logic (PAL) (without common knowledge) (Baltag et al. 1998; Baltag and Renne 2016; Plaza 2007) usually involves reduction axioms that allow for replacing formulas containing public announcements with—eventually—formulas of the static language. Completeness then follows from the respective complete static base logics. However, the standard reduction axioms (Baltag and Renne 2016; van Benthem 2007; van Ditmarsch et al. 2007) would not work for our purposes. Notice that \([\psi !] \spadesuit \phi \leftrightarrow (\psi \rightarrow \spadesuit [\psi !] \phi )\), where \(\spadesuit =K, \Box \), is valid due to our clause for \([\psi !]\phi \); this clause maintains its intuitive interpretation also at impossible worlds. Despite this validity, replacing \([\psi !]\phi \) in accord with the other reduction axioms would not necessarily go through, since the truth clauses for both K and \(\Box \) also range over impossible worlds, in order to avoid closure under logical equivalence.

In order to reduce formulas with public announcements, we have to follow a procedure similar to the one adopted for rule-applications. That is, we need an auxiliary static operator encoding that \([\psi !] \phi \) is not evaluated arbitrarily when under the scope of an operator that quantifies over \(W^I\), instead following the regular public announcement clause.Footnote 14

The addition of a special static operator acting as implication at \(w \in W^I\) is necessary for the reduction of this extended setting. The need for a more expressive language is justified in light of the intuitive readings of K and \(\Box \), and their interpretations in Definition 8. Asking that \(K \phi \) and \(\Box \phi \) are true iff \(\phi \) is true throughout the suitable set of possible and impossible worlds captures the fallibility of the agent and breaks the forms of logical omniscience. The effect of a truthful public announcement in the agent’s epistemic state involves the external information (hence the deletion of worlds), but the prefixed formula is evaluated in the resulting model, which still encodes the limitations of our agent. This is why reducing announcements deviates from the procedure of successive replacements based on the standard reduction axioms. For example, [p!]Kp (where p is an atom) is valid because after deleting the non-p worlds, Kp becomes true. This is equivalent to \(p \rightarrow K[p!]p\) but not to \(p \rightarrow K (p \rightarrow p)\): the agent does not necessarily know \(p \rightarrow p\). According to the rationale of our framework, a fallible but bounded agent might have to reason to reach \(p \rightarrow p\) too; this piece of knowledge should not be taken for granted.

4.2 Other Policies of Integrating External Information

Public announcements are not the only operations for integrating external information. Plausibility models allow us to encode more nuanced notions of knowledge and belief, thus more nuanced policies of integrating external information. For example, the agent might get information coming from a reliable, but not absolutely trustworthy source. This “soft” information, contrary to the “hard” information of a public announcement, triggers a re-arrangement of plausibility, and not an elimination of worlds. Examples of such operations include radical (or lexicographic) upgrades and conservative upgrades (Baltag and Renne 2016; Baltag and Smets 2008; van Benthem 2007, 2008b, 2011; Rott 1989). A radical upgrade with \(\psi \) changes the plausibility as follows: \(\psi \)-worlds are ranked over the non-\(\psi \) worlds but the ranking of worlds within the two zones remains intact. Regarding conservative upgrades: the most plausible of the \(\psi \)-worlds are ranked over all other worlds and the rest remain unchanged.Footnote 15 In what follows, we spell out how radical upgrades can be incorporated in our framework, and we notice that more conservative policies can be dealt with along similar lines.

Extending the syntax We further expand the language \(\mathcal {L}_{ PA }\), using operators of the form \([\psi \Uparrow ]\), where \(\psi \in \mathcal {L}_p\), to denote radical upgrades with \(\psi \). More specifically, \([\psi \Uparrow ]\phi \) reads “after the radical upgrade with \(\psi \), \(\phi \) is true”.

Extending the semantics We hereby present the model transformation by a radical upgrade—again, assuming it amounts to an effortless process.Footnote 16 As an auxiliary step take: \(\ge ^{\psi \Uparrow } = (\ge \cap (W \times [[\psi ]])) \cup (\ge \cap ( \overline{[[\psi ]]} \times W )) \cup ( \sim \cap (\overline{[[\psi ]]} \times [[\psi ]]))\), where \(\overline{[[\psi ]]}\) denotes the set of worlds where \(\psi \) is not satisfied. Then:

Definition 12

(Model transformation by a radical upgrade) Take plausibility model \(M=\langle W^P, W^I, {\textit{ord}}, V, R, cp \rangle \). Its transformation by a radical upgrade with \(\psi \in \mathcal {L}_p\) is the model \(M^{\psi \Uparrow }=\langle W^P, W^I, {\textit{ord}}^{\psi \Uparrow }, V, R, cp \rangle \), where \({\textit{ord}}^{\psi \Uparrow }\) is any function from the set \(\{f: W \rightarrow \varOmega \mid \text { for any } w, u \in W: f(w) \ge f(u)\) iff \(w \ge ^{\psi \Uparrow } u\}\).

Notice that any ordinal-assigning function that preserves the ordering of \(\ge ^{\psi \Uparrow }\) works. This is because we are solely interested in the upgrade having its usual qualitative effect: prioritizing \(\psi \)-worlds over non-\(\psi \) ones. The properties of our models are clearly preserved. Then for \(w \in W\), the truth clause for \([\psi \Uparrow ] \phi \) is given by: \(M,w \models [\psi \Uparrow ] \phi \) iff \(M^{\psi \Uparrow }, w \models \phi \).
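
To illustrate the purely qualitative effect of Definition 12, the sketch below reassigns ordinals so that \(\psi \)-worlds precede non-\(\psi \) worlds while each zone keeps its internal order (ties are broken arbitrarily, which suffices for this illustration). The world names and truth sets are hypothetical.

```python
# A sketch of a radical upgrade: psi-worlds become more plausible than non-psi
# worlds, while the relative order inside each of the two zones is preserved.
def radical_upgrade(ordinals, satisfies_psi):
    """ordinals: world -> ordinal (smaller = more plausible).
    Return a new ordinal assignment realizing the upgraded ordering."""
    psi_zone = sorted((w for w in ordinals if satisfies_psi(w)), key=ordinals.get)
    rest = sorted((w for w in ordinals if not satisfies_psi(w)), key=ordinals.get)
    return {w: i for i, w in enumerate(psi_zone + rest)}

# Hypothetical worlds loosely inspired by the muddy-children setting, upgraded
# with n_b: the two n_b-worlds move ahead of the non-n_b world.
ordinals = {"u1": 0, "u2": 1, "u3": 2}   # u1 is initially the most plausible
truths = {"u1": {"m_a"}, "u2": {"n_b"}, "u3": {"n_b", "m_a"}}
print(radical_upgrade(ordinals, lambda w: "n_b" in truths[w]))
# {'u2': 0, 'u3': 1, 'u1': 2}
```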

Therefore, our plausibility models also facilitate the study of more nuanced attitudes and softer update policies. As an example, we consider an alternative version of the Muddy Children Puzzle found in Baltag and Smets (2009). It treats the incoming information not as “hard” information, but as “soft” information (the sources are considered reliable but not absolutely trustworthy).Footnote 17

Example 3

(Bounded muddy children and radical upgrades) We now approach the aforementioned scenario of Example 2, taking child b as a reliable, but not infallible, source of information. Therefore, the incoming information that \(n_b\) is treated as an upgrade, which alters the plausibility ordering, and not as a public announcement, which deletes non-\(n_b\) worlds altogether (Fig. 5).

Fig. 5
The model of Fig. 2 after the upgrade with \(n_b\). Clearly, \( \Box n_b\) is satisfied at the actual world (\(w_4\)), unlike \(K n_b\). Provided that \({\textit{TR}}\) and \({\textit{MP}}_1\) are affordable and available, \(\langle {\textit{TR}} \rangle \langle {\textit{MP}}_1 \rangle \Box m_a\) is also satisfied, unlike \(\langle {\textit{TR}} \rangle \langle {\textit{MP}}_1 \rangle K m_a\).

5 Conclusions and Further Research

By combining DEL and an impossible-worlds semantics, we modelled fallible but boundedly rational agents who can in principle eliminate their ignorance as long as the task lies within cognitively affordable applications of inference rules. We discussed how this framework accommodates epistemic scenarios realistically and how it fits in the landscape of similar attempts directed against logical omniscience. It was shown that this combination can be reduced to a syntactic, possible-worlds structure that allows for useful formal results. We finally furnished this framework with actions for external information to better account for the fine-grained and mixed nature of reasoning processes.

Note that while factivity of knowledge is indeed warranted by the reflexivity of our models, the correspondence between other properties (such as transitivity) and forms of introspection is disrupted by the impossible worlds. Avoiding unlimited introspection falls within our wider project to model non-ideal agents. Just as with factual reasoning though, we propose a principle of moderation, achieved via the introduction of effortful introspective rules whose semantic effect is similarly projected on the structure of the model. Furthermore, it is precisely along these lines that a multi-agent extension of this setting can be pursued.

Moreover, it is interesting to search for alternatives to the use of special operators in providing reduction axioms for rule-applications and announcements. This might be especially useful for multi-agent frameworks. In particular, there are other tools from DEL that allow uniform treatment of (communicative) actions, such as action models (Baltag et al. 1998). Given ongoing work, we believe that action models with postconditions (van Benthem et al. 2006), along with the set-expressions used in Velázquez-Quesada (2011, Chapter 5) to embed these into awareness frameworks, could help in obtaining simpler reduction axioms for both rule applications and communicative actions.

Apart from extending the logical machinery in order to capture richer reasoning processes, another natural development is to fine-tune elements of the model discussed so far, in order to better align it with the experimental findings in the literature on rule-based human reasoning. We have already indicated that the function c, which is responsible for the assignment of cognitive costs, should be sensitive to both the rule-schemas in question and the complexity of their particular instances. The well-ordering of inferences that Cherniak (1986) suggests is supported by the literature we have referred to so far, but, at this stage, the evidence fits a qualitative ordering of schemas, while a precise quantitative assignment calls for more empirical input. Specifying the intuitive assumption that the more complex an instance, the more cognitively costly it is, breaks down into two tasks: (i) choosing an appropriate measure of logical complexity: number of literals, (different) atoms, connectives, etc.; (ii) using experimental data to fix coefficients that associate the measure with the performance of agents (with respect to our selected resources). Such a procedure will be pursued in a future paper, and it can illuminate whether there are classes of inferences, sharing properties in terms of our measure, that should be assigned equal cognitive costs, as one might intuitively expect.