1 Introduction

When engaging in a debate, we do not only exchange arguments; we also reason about the information available to others, and both play a crucial role. On the one hand, acquiring and communicating new arguments can shift one’s point of view on the issue of the debate, or make it more robust. On the other hand, beliefs about someone else’s background information determine which arguments one is willing to put on the table and in which order, as in a game of incomplete information. To understand how argumentation unfolds in real-life debates we need to reason, at least, about goals, beliefs, and information change. The latter involves communication moves by the speaker (sender)—choosing and disclosing a certain piece of information—and information updates by the hearer (receiver)—incorporating that piece into her knowledge base.Footnote 1 Our running example illustrates how strongly these elements interact with each other.

Example 1

Charlie wants to convince his mother that he has the right to have a chocolate candy (a). Mom rebuts that too much chocolate is not good for his teeth (b). Charlie may counterargue that he hasn’t had chocolate since yesterday (d). Unfortunately for him, Mom saw him grab chocolate from the pantry just a few hours ago (e)—by the way, she wrongly thinks that Charlie noticed this. Alternatively, Charlie may quote scientific evidence from a journal paper on Pscience that eating chocolate is never too much (c). Mom does not know that this paper has been retracted (f) and, in principle, this would be a safe move for Charlie.Footnote 2

Charlie’s goal is to make argument a justified in the eyes of his mother. To achieve this goal he needs to rebut b. He has several options to do so: he may put forward d, or c, or both, i.e. he has to select a communication move. To choose his strategy, he needs clues about Mom’s background information, i.e. to form beliefs about her beliefs. Finally, success also depends on Mom’s attitude towards the information she receives, i.e. her updating policy.

Logical languages and semantics provide a powerful tool to reason about these aspects of argumentation.Footnote 3 Here we aim to show that dynamic epistemic logic (DEL) can serve as a general framework to deal with many conceptual aspects of argumentation which are of interest in general argumentation theory and its more recent developments in AI and computer science, specifically in the study of computational models of argument (see Sect. 8).

We can see the language of DEL as structured in three layers. The first layer consists of the propositional language. The one we adopt makes it possible to encode the state of a multi-agent debate, which semantically amounts to a propositional valuation. Using tools from abstract argumentation (Dung 1995), such states are modelled here as multi-agent argumentation frameworks (MAF). They include (a) the description of a universal argumentation framework consisting of all the arguments (and conflicts among them) that are potentially available to the agents, and (b) the specific information of each agent, i.e. the part of the universal framework each agent is aware of. Languages of propositional logic are widely used to encode argumentation frameworks; see Besnard et al. (2020) for a survey. In many cases such encodings employ minimal resources as they are designed with efficiency in mind, e.g. to reduce computational problems in abstract argumentation to SAT-solving problems (Cerutti et al. 2013). The language and semantics we adopt are not tailored for computational purposes and are rather rich instead. In return, they allow us to encode fine-grained argumentative notions such as the agents’ subjective justification status of specific arguments, which, as we will see, is needed to talk about their goals.

The modal part of the language constitutes the second layer and includes epistemic (resp. doxastic) operators for knowledge (resp. belief). With these operators it is possible to express individual attitudes at any level of nesting, such as the second-level attitude ‘Charlie believes that Mom believes that argument a is justified for Charlie’. At this stage, the language is interpreted in standard Kripke-style semantics where states are MAFs. The plurality of states serves to capture the uncertainty of agents about the actual state of the debate. As mentioned, modelling uncertainty is relevant for analyzing the strategic aspects of argumentation. Recent approaches in formal argumentation model uncertainty by means of incomplete argumentation frameworks (Baumeister et al. 2018a, b), control argumentation frameworks (Dimopoulos et al. 2018), and opponent models (Oren and Norman 2009; Rienstra et al. 2013; Hadjinikolis et al. 2013; Thimm 2014; Black et al. 2017). These approaches provide efficient solutions for computational and application purposes, such as building automated persuasion systems (Hunter 2018). Our goal here is mainly one of conceptual analysis, for which we seek to achieve generality. Indeed, we show in Sect. 8 that it is possible to translate the central notions of these approaches by means of our language and semantics. Moreover, since it has the expressive power to talk about epistemic attitudes at any level, our language is able to frame complex agent goals. In our running example, Charlie’s goal amounts to inducing Mom to believe that a is justified, i.e. a first-level attitude. However, we shall see in Sect. 7 that goals and strategies for action may entail more articulated nestings. Furthermore, although we frame our main examples in contexts of strategic and persuasive argumentation, this framework is not conceptually limited to such contexts. Other uses of argumentation entail different kinds of goals but, insofar as they can be phrased in terms of individual or collective beliefs, the DEL approach is useful there too. This holds, for example, for collective inquiry, where the aim is to reach common knowledge or shared belief.Footnote 4

The third layer of the language includes dynamic modalities to reason about the effect of argumentative actions (e.g. communicating an argument) and different belief updates by the agents. Here again, while the dynamics of argument communication is the focus of a well-established tradition in abstract argumentation (see Doutre and Mailly 2018 for a survey), belief updates are mostly confined to the tradition going from AGM belief revision (Alchourrón et al. 1985) to DEL (van Ditmarsch et al. 2007; van Benthem 2011). To the best of our knowledge, there is no unified logical framework for treating both aspects.Footnote 5 A general framework for reasoning about argumentative and epistemic actions becomes relevant insofar as agents are liable to revise their knowledge base in different ways, as is the case for Mom in our running example. For this purpose, we use a rather expressive language, that of DEL with factual change (van Ditmarsch et al. 2005; van Benthem et al. 2006), which comes at the price of a blow-up in computational complexity.Footnote 6

The rest of this paper is organized as follows. In Sect. 2 we illustrate the background and the general motivations for our work. Section 3 presents the preliminary tools from abstract argumentation and introduces the notion of MAF. There are indeed several alternative ways to represent a multi-agent scenario of debate. Here we take a specific option and leave critical discussion of other possibilities to Sect. 9. In Sect. 4 we introduce a propositional language to encode MAFs and prove soundness for this encoding in Proposition 1. In Sect. 5, we develop the epistemic fragment for reasoning about knowledge and belief in abstract argumentation. We introduce the general semantics of epistemic argumentative models (Definition 8). After this, we isolate specific subclasses of models that capture a number of constraints on the awareness of arguments and attacks, as well as on epistemic accessibility. Then we provide axiomatisations for these subclasses and show their soundness and completeness in Theorem 1. In Sect. 6 we introduce the full language of DEL for argumentative models. Semantics are given in terms of event models and product updates as in Baltag and Moss (2004). Here we show how to model basic communication moves and information updates under full trust. We then provide completeness results via reduction axioms (Theorem 2). In Sect. 7, we exploit event models to encode the effects of more subtle policies of communication and information update under mixed trust. In Sect. 8 we show how this framework relates to other formalisms developed in the area of computational argumentation. We conclude in Sect. 9 by discussing conceptual alternatives to our modelling choices as well as open problems and future work. Given the length of the proofs of most of our results, and the substantial amount of tools they involve, we leave them for the final “Appendix”, where we also prove additional results for an extended modal language.

2 Historical background and general motivations

By bringing together two different formal traditions such as epistemic modal logic and abstract argumentation, we aim not only to provide results of interest for both, but also to show that their respective toolboxes provide powerful conceptual resources for seeing both traditions in a different light. At least since Aristotle, logic and the study of argumentation ran along separate lines, the latter being the exclusive competence of rhetoric. This separation contributed to crystallize the notion of deductive inference from classical logic as the gold standard of correct reasoning. Classical inference is non-defeasible and typically abstracts away from the dialogical/adversarial dimension in which real-life argumentation takes place. From the philosophical side, major criticisms of this paradigm came in the twentieth century from the works of Toulmin (2003), Perelman and Olbrechts-Tyteca (1958), and Hamblin (1970). Yet, formal research was still dominated by traditional approaches, at least until the new-born field of artificial intelligence undertook modelling human-like reasoning, and eventually converged in the definition of systems of non-monotonic logics (Reiter 1980) and defeasible reasoning (Pollock 1987, 1991). A turning point was the introduction of abstract argumentation by Dung (1995). Here the main tools are argumentation frameworks, i.e. directed graphs which represent debates at an abstract level, where arguments are nodes and attacks from one argument to another—e.g. undercuts or rebuttals—are directed edges. The key semantic notion in abstract argumentation is that of a solution, i.e. a set of arguments that constitutes an acceptable opinion as the outcome of a debate. It turns out that the most relevant semantics for non-monotonic and defeasible reasoning can be expressed in terms of solution concepts for argumentation frameworks (Dung 1995), which thus provide a powerful mathematics for defeasible reasoning in dialogical scenarios. Abstract argumentation can be seen as a very general theory of conflict that, in the words of Dung, captures the fact that

the way humans argue is based on a very simple principle which is summarized succinctly by an old saying: “The one who has the last word laughs best” (Dung 1995, p. 322).

For our purposes, argumentation frameworks are a first adequate building block to model scenarios like Example 1, where solution concepts provide the essentials for defining agents’ (defeasible) justification of an argument and their goals.

From the beginning of the 1980s—in the wake of the “dynamic turn” pushed by the introduction of propositional dynamic logic (Fischer and Ladner 1979)—logicians have devoted increasing attention to information change, the study of how information states transform under new data. The early approach that dominated the field was AGM belief revision (Alchourrón et al. 1985), later joined by DEL (Plaza 1989; Gerbrandy and Groeneveld 1997; Baltag et al. 2016). Dynamic epistemic logics, endowed with plausibility models and operators of conditional belief, allow a systematic treatment of AGM-style belief revision and can model a wide range of information updates (van Benthem 2007; Baltag and Smets 2008). A dominant part of the work in both areas has been shaped by a normative approach to the study of information change. AGM belief revision typically focuses on postulates encoding the properties that an update operation should satisfy to be considered rational. Although DEL has the flexibility to model a wide range of epistemic transformations, including the effects of lying and deception (Baltag and Moss 2004; van Ditmarsch et al. 2007), it is fair to say that the mainstream focus has been the update of information under new evidence, where the latter is intended as truthful information made available to the agent. The typical belief upgrades studied in DEL applied to belief revision—such as public announcement !P, lexicographic upgrade \(\Uparrow P\) and minimal upgrade \(\uparrow P\)—implicitly assume that the source of information is trusted as infallible (public announcement) or at least believed to be trustworthy (minimal upgrade) (Rodenhäuser 2014). However, most situations of real-life information exchange among individuals are of mixed trust: the source of information is taken to be trustworthy to a limited, or at least context-dependent, extent; we may trust Professor Bertrand Russell on logic matters, probably less so when he predicts the outcome of the next horse race. With the exception of Rodenhäuser (2014), mixed trust of this and other kinds has received limited attention in DEL. We will handle situations of mixed trust with our formal machinery in Sect. 7.

From a normative perspective, many interesting real-life mechanisms of information update are deemed “descriptive” and left to psychologists, when not discarded as reasoning flaws of an imperfect reasoner. This holds for confirmation bias (Wason 1960), more adequately called myside bias (Perkins et al. 1986)—that is, the tendency to scrutinize information that disconfirms our prior opinions while loosely filtering and actively searching for confirming evidence—and for the operation by which we reduce cognitive dissonance upon receiving information which is inconsistent with our prior beliefs (Festinger 1957). Scholars in logic can hardly be blamed for this attitude, since it is supported by most psychology of reasoning, as witnessed by the extensive debate on, e.g., the Wason selection task (Wason 1966). More recently, Mercier and Sperber’s argumentative theory of reasoning advances a different view, according to which these purported flaws are rather features of reasoning, having an evolutionary explanation in the social context of human communication (Mercier and Sperber 2011, 2017). The argumentative theory of reasoning is a naturalized approach that sees reasoning as a specific cognitive module which “evolved for the production and evaluation of arguments in communication” (Mercier and Sperber 2011, p. 58) rather than to perform sound logical and probabilistic inferences, or to enhance individual cognition. Seen from this angle, the myside bias serves the goal of convincing others and keeping epistemic vigilance. Indeed, what we often blame as a bad attitude in everyday confrontations is a common—and mostly healthy—practice in scientific debate over new theories and explanations (Kelly 2008). In general, an argumentation-based approach to reasoning and communication can explain collective dynamics like groupthink and opinion polarization. When individuals with similar opinions on a given issue discuss, they tend to mutually reinforce their views by providing each other novel and persuasive arguments in the same direction.Footnote 7 A further step in this direction is to investigate the triggering effect of more subtle mechanisms of information update, akin to the myside bias. Section 7 shows that DEL can be used for this purpose. Indeed, the notion we characterize as sceptic update provides one possible way of understanding biased assimilation of new arguments. Before getting there, however, a careful logical construction is needed, which we begin in the next section.

3 Multi-agent argumentation frameworks

The fundamental notion we employ is that of an argumentation framework, which is no more and no less than a directed graph.

Definition 1

An argumentation framework (AF) is a pair \({\textsf {F}}=(A,R)\) where \(A\ne \emptyset \) is a set of arguments and \(R\subseteq A\times A\) is called the attack relation. We adopt the infix notation \(aRb\) to abbreviate \((a,b)\in R\). Given a set of arguments \(B\subseteq A\), we denote by \(B^{+}\) the set of arguments attacked by B, that is \(B^{+}:=\{a \in A\mid \exists b\in B{:}\,b Ra\}\).

An AF represents a full debate seen from a third-person point of view, where all potential arguments and attacks are on the table. Clearly, at a given moment of a debate, each participant is aware of a specific subset of arguments and attacks, i.e. her subjective information about the debate. This calls for the definition of a multi-agent AF. A number of alternative options are available in the literature, and many others are conceivable. Each choice depends on specific assumptions about the common ground of the debate and the awareness constraints on the agents’ information. In our approach we assume the following:

  (a) the set of arguments that are potentially available to agents is finite;

  (b) it is fixed in advance;

  (c) there is an objective matter of fact, independent of subjective views, by which an argument attacks another;

  (d) agents can only be aware of arguments in the set from (a), i.e. there are no non-existing or virtual arguments (cf. Schwarzentruber et al. 2012; Rienstra et al. 2013);

  (e) agents can be aware of an attack between a and b only if they are aware of both a and b;

  (f) if an agent is aware of an attack then this attack holds;

  (g) if an objective attack holds between two arguments and some agent is aware of both, then she is also aware of the attack.

Together, (f) and (g) imply that agents have a (locally) sound and complete awareness of attacks (\(\textsf {SCAA}\)). In general, each of these choices has alternatives, and this yields a very large space of design possibilities, which we critically discuss in Sect. 9. It may seem at first sight that constraints (a)–(g) impose strict limitations on the agents’ uncertainty, but we shall see (Sect. 5) that this is not quite so, since the modal component of our framework allows us to recapture all sorts of uncertainty. Based on our assumptions we define a multi-agent argumentation framework as follows:

Definition 2

(Multi-agent argumentation framework) A multi-agent argumentation framework (MAF) for a non-empty and finite set of agents \(\textsf {Ag}\) is a 4-tuple \((A,R, \{A_i\}_{i \in \textsf {Ag}},\{E_1,...,E_n\})\) such that \((A,R)\) is a finite AF (the universal argumentation framework, UAF), \(A_i \subseteq A\) is the set of arguments agent i is currently aware of, and \(\{E_1,\ldots ,E_n\}\) is a specific enumeration of the subsets of \(A\), which we assume as fixed from now on. Given a MAF and an agent \(i\in \textsf {Ag}\), agent i’s partial information is defined as \((A_i,R_i)\) where \(R_i := R\cap (A_i \times A_i)\).

Having \(A\) and R finite and fixed captures the constraints from (a) to (c). Constraint (d) amounts to \(A_i \subseteq A\). Finally, the definition of \(R_{i}\) subsumes (e)–(g). The enumeration \(\{E_1,\ldots ,E_n \}\) of \(\wp (A)\) is an important device for encoding, the use of which will be clarified in Sect. 4. Figure 1 provides a pictorial representation of a two-agent MAF describing Example 1.

Fig. 1

A MAF for Example 1 (Charlie and Mom). The universal argumentation framework consists of nodes a–f and the corresponding attacks, as described in the example. Agent 1 (Charlie) is aware of the entire universal argumentation framework (area in blue) while agent 2 (Mom) is only aware of \(\{a,b,e\}\) and the attacks between them (red ellipses). We omit the representation of an enumeration of \(\wp (A)\)

Solution concepts from abstract argumentation are key to subjective justification and goals. A solution is a set of arguments that meets intuitive constraints to constitute an acceptable point of view.Footnote 8 Several solution concepts have been introduced by Dung (1995) and subsequent work in abstract argumentation; see Baroni et al. (2018) for an extensive state-of-the-art overview. For the sake of presentation, we focus on preferred solutions, but our approach can be straightforwardly extended to other admissibility-based semantics (i.e., grounded, complete and stable).Footnote 9

Definition 3

(Defence and preferred solutions) Given an AF \({\textsf {F}}=(A,R)\), a set of arguments \(B\subseteq A\), and an argument \(a \in A\): B defends a iff for every \(c \in A\): if \(c Ra\) then \(c \in B^{+}\). Moreover, B is said to be a complete solution iff (1) it is conflict-free, i.e. \(B\cap B^{+}=\emptyset \) and (2) it contains precisely the arguments that it defends, i.e. \(b \in B\) iff B defends b. B is a preferred solution iff it is a maximal (w.r.t. set inclusion) complete solution. Given an AF \({\textsf {F}}=(A,R)\) we denote by \(\textsf {Pr}({\textsf {F}})\) the set of all its preferred solutions.

In the UAF of Fig. 1, the only preferred solution is \(\{b,e,f\}\). This also corresponds to agent 1’s preferred solution, as his awareness set \(A_1\) coincides with the entire framework. When we relativize to agent 2’s awareness set \(A_2\), we obtain instead \(\{b,e\}\) as the unique preferred solution. An AF may have more than one preferred solution. Plurality of solutions allows us to define—following Wu and Caminada (2010)—the fine-grained justification status of an argument relative to an AF. The latter is key to expressing graded notions of acceptability (Beirlaen et al. 2018; Baroni et al. 2019) for reasoning about agents’ goals and the degree of their opinion about the debated issue.Footnote 10

We follow the extension-based characterization of this notion provided by Baroni et al. (2018).Footnote 11

Definition 4

(Fine-grained justification status) Given an AF \({\textsf {F}}=(A,R)\) and an argument \(a\in A\), then a is said to be:

  • strongly (or sceptically) accepted iff \(\forall E \in \textsf {Pr}({\textsf {F}})\, a \in E\);

  • weakly accepted iff (\(\exists E \in \textsf {Pr}({\textsf {F}}){:}\, a \in E\), \(\exists E \in \textsf {Pr}({\textsf {F}}){:}\, a \notin E\), and \(\forall E \in \textsf {Pr}({\textsf {F}})\, a\notin E^{+}\));

  • weakly rejected iff (\(\exists E \in \textsf {Pr}({\textsf {F}}){:}\, a\in E^{+}\), \(\exists E \in \textsf {Pr}({\textsf {F}}){:}\, a\notin E^{+}\), and \(\forall E \in \textsf {Pr}({\textsf {F}})\, a\notin E\));

  • strongly rejected iff \(\forall E \in \textsf {Pr}({\textsf {F}})\,a \in E^{+}\); and

  • borderline otherwise.Footnote 12

Note that the justification status of an argument is always relative to an AF \({\textsf {F}}=(A,R)\), but we omit an explicit reference to \({\textsf {F}}\) when the context is clear enough. Again, the notion can be straightforwardly relativised to agents. For instance, given \(\textsf {MAF}=(A,R, \{A_i\}_{i \in \textsf {Ag}},\{E_1,...,E_n\})\) we say that \(a\in A\) is strongly accepted by agent j iff \(a\in A_j\) and a is strongly accepted w.r.t. \((A_j,R_j)\). As an example, argument b of Fig. 1 is strongly accepted by 1 and 2, and argument a is strongly rejected by both agents.
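Definitions 3 and 4 can be computed directly by brute-force enumeration over the subsets of \(A\). The following Python sketch (a minimal illustration with our own helper names, not part of any established library) does exactly that; run on the UAF of Fig. 1 it returns \(\{b,e,f\}\) as the unique preferred solution and classifies a as strongly rejected, in agreement with the claims above.

```python
from itertools import combinations

def attacked_by(B, R):
    """B+ : the arguments attacked by some member of B."""
    return {y for (x, y) in R if x in B}

def is_complete(B, A, R):
    """Definition 3: B is conflict-free and contains exactly the arguments it defends."""
    Bplus = attacked_by(B, R)
    if B & Bplus:
        return False
    defended = {a for a in A if all(c in Bplus for c in A if (c, a) in R)}
    return B == defended

def preferred_solutions(A, R):
    """Maximal (w.r.t. set inclusion) complete solutions."""
    complete = [set(B) for k in range(len(A) + 1)
                for B in combinations(sorted(A), k) if is_complete(set(B), A, R)]
    return [B for B in complete if not any(B < C for C in complete)]

def status(a, A, R):
    """Definition 4: fine-grained justification status of argument a."""
    prefs = preferred_solutions(A, R)
    inn = [a in E for E in prefs]
    att = [a in attacked_by(E, R) for E in prefs]
    if all(inn):
        return "strongly accepted"
    if any(inn) and not all(inn) and not any(att):
        return "weakly accepted"
    if all(att):
        return "strongly rejected"
    if any(att) and not all(att) and not any(inn):
        return "weakly rejected"
    return "borderline"

# The UAF of Fig. 1: b attacks a, d attacks b, e attacks d, c attacks b, f attacks c.
A = {"a", "b", "c", "d", "e", "f"}
R = {("b", "a"), ("d", "b"), ("e", "d"), ("c", "b"), ("f", "c")}
print(preferred_solutions(A, R))   # the single preferred solution {b, e, f}
print(status("a", A, R))           # strongly rejected
```

Applying the same functions to agent 2’s partial information \((A_2,R_2)\) from Fig. 1 returns \(\{b,e\}\) as her unique preferred solution.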

4 Encoding argumentative notions

Logical languages are a general tool to describe mathematical structures, and multi-agent AFs are one of these. Compared to others, though, a propositional language has minimal descriptive power.Footnote 13 However, it turns out that, in the finite case, its expressivity is sufficient for our purpose of encoding the notions introduced in the previous section.Footnote 14 Furthermore, since we construct a Kripke semantics where multi-agent AFs are states (Sect. 5), a propositional language provides a natural fit with the techniques of epistemic logic.

The set of propositional variables \({\mathcal {V}}^{A}_{\textsf {Ag}}\), where \(A\) is a set of arguments (intuitively, the domain of the UAF) and \(\textsf {Ag}\) is a set of agents, is defined as the union of the following sets:

$$\begin{aligned}&{{\mathcal {A}}}{{\mathcal {T}}}:=\{a \leadsto b \mid (a,b) \in A\times A\} \text {;}\\&{\mathcal {O}}:=\{\textsf {aw}_i(a)\mid i\in \textsf {Ag}, a \in A\}\text {;}\quad \text {and} \\&{\mathcal {B}}:=\{a {\upepsilon }E_k \mid a \in A, E_k \subseteq A\}. \end{aligned}$$

Each variable \(a \leadsto b\) reads “argument a attacks b” and \(\textsf {aw}_i(a)\) stands for “agent i is aware of a”. The informal reading of the third kind of variables \(a {\upepsilon }E_k\) is “argument a belongs to subset \(E_k\)”. These variables are needed because the definition of (fine-grained) justification status quantifies over sets (Definition 4). The language \({\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}})\) is built from \({\mathcal {V}}^{A}_{\textsf {Ag}}\) using Boolean functors \(\lnot , \wedge , \vee \), \(\rightarrow \) and \(\leftrightarrow \) as usual. A given \(\textsf {MAF}=(A,R, \{A_i\}_{i \in \textsf {Ag}},\{E_1,...,E_n\})\) determines unequivocally its associated set of variables \({\mathcal {V}}^{A}_{\textsf {Ag}}\).

The semantics of this propositional language is defined, as standard, by means of valuations of its propositional variables. Given a valuation \(v \subseteq {\mathcal {V}}^{A}_{\textsf {Ag}}\) and a propositional variable p, we say that p is true at v iff \(p \in v\). A valuation recursively determines the truth value of any formula \(\varphi \in {\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}})\) in the usual way. \(v\vDash \varphi \) stands for “\(\varphi \) is true at v”.

Definition 5

(Associated valuation and theory of a MAF) Given \(\textsf {MAF}=(A,R, \{A_i\}_{i \in \textsf {Ag}},\{E_1,...,E_n\})\), we define its unequivocally associated valuation as \(v_{\textsf {MAF}}:=\{a \leadsto b \mid (a,b)\in R\}\cup \{\textsf {aw}_i(a)\mid a \in A_i\}_{i \in \textsf {Ag}}\cup \{a {\upepsilon }E_k\mid a \in E_k \quad \text {for every} \quad 1 \le k \le n\}\). Furthermore, the following Boolean formula \({\textsf {Th}_{\textsf {MAF}}}\), called the theory of \(\textsf {MAF}\), encodes \(\textsf {MAF}\), in the sense that \(v_{\textsf {MAF}}\) is the unique valuation such that \(v_{\textsf {MAF}} \vDash {\textsf {Th}_{\textsf {MAF}}}\):

$$\begin{aligned}&{\textsf {Th}_{\textsf {MAF}}}:= \bigwedge _{(a,b)\in R} a \leadsto b \wedge \bigwedge _{(a,b)\notin R} \lnot (a \leadsto b) \quad \wedge \bigwedge _{1 \le k \le n}\Big (\bigwedge _{a \in E_k}a {\upepsilon }E_k\wedge \bigwedge _{a \notin E_k} \lnot a {\upepsilon }E_k\Big ) \\&\quad \wedge \bigwedge _{i \in \textsf {Ag}}\Big ( \bigwedge _{a \in A_i} \textsf {aw}_i(a) \wedge \bigwedge _{a\notin A_i} \lnot \textsf {aw}_i(a) \Big ). \end{aligned}$$

Example 2

Let \(\textsf {MAF}_0=(A,R, \{A_1, A_2\},\{E_1,E_2,E_3,E_4\})\) s.t. \(A=\{a,b\}\), \(R= \{(b,a)\}\), \(A_1=\{a\}\), \(A_2=\{b\}\), \(E_1=\emptyset \), \(E_2=\{a\}\), \(E_3=\{b\}\), \(E_4=\{a,b\}\); we have that \({\textsf {Th}_{\textsf {MAF}}}_{0}=\lnot a \leadsto a \wedge \lnot a \leadsto b \wedge b \leadsto a \wedge \lnot b \leadsto b \wedge (\lnot a {\upepsilon }E_1 \wedge \lnot b {\upepsilon }E_1)\wedge (a {\upepsilon }E_2 \wedge \lnot b {\upepsilon }E_2) \wedge (\lnot a {\upepsilon }E_3 \wedge b {\upepsilon }E_{3}) \wedge (a {\upepsilon }E_4\wedge b {\upepsilon }E_{4}) \wedge \textsf {aw}_1(a) \wedge \lnot \textsf {aw}_1(b) \wedge \lnot \textsf {aw}_2(a) \wedge \textsf {aw}_2(b) \).
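As a small illustration of Definition 5, the following sketch (using ad hoc string names for the variables of \({\mathcal {V}}^{A}_{\textsf {Ag}}\); these conventions are ours and purely illustrative) generates the conjuncts of \({\textsf {Th}_{\textsf {MAF}}}\) mechanically. Applied to \(\textsf {MAF}_0\) it reproduces, up to ordering and notation, the formula displayed in Example 2.

```python
from itertools import combinations

def powerset_enum(A):
    """A fixed enumeration E_1, ..., E_n of the subsets of A (E_1 is the empty set)."""
    elems = sorted(A)
    return [set(s) for k in range(len(elems) + 1) for s in combinations(elems, k)]

def theory_of_maf(A, R, awareness, enum):
    """The conjuncts of Th_MAF (Definition 5), as readable strings."""
    lits = []
    for a in sorted(A):                        # attack variables
        for b in sorted(A):
            lits.append(f"{a}~>{b}" if (a, b) in R else f"~({a}~>{b})")
    for k, E in enumerate(enum, start=1):      # subset variables
        for a in sorted(A):
            lits.append(f"{a} eps E{k}" if a in E else f"~({a} eps E{k})")
    for i, A_i in sorted(awareness.items()):   # awareness variables
        for a in sorted(A):
            lits.append(f"aw_{i}({a})" if a in A_i else f"~aw_{i}({a})")
    return " & ".join(lits)

# MAF_0 of Example 2
A, R = {"a", "b"}, {("b", "a")}
awareness = {1: {"a"}, 2: {"b"}}
print(theory_of_maf(A, R, awareness, powerset_enum(A)))
```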

For what follows it is relevant to note that not every valuation is a valuation for a MAF. The reason is that subset variables may fail to represent a proper enumeration of subsets, in the sense of the following definition.

Definition 6

Let \(A\) be a finite set of arguments with \(|\wp (A)|=n\). We say that a valuation \(v\subseteq {\mathcal {V}}^{A}_{\textsf {Ag}}\) represents an enumeration of \(\wp (A)\) iff for all k, m with \(1 \le k<m \le n\) it holds that \(\{x\in A\mid x {\upepsilon }E_k \in v\}\ne \{x\in A\mid x {\upepsilon }E_m \in v\}\).

The inequality of two sets \(E_k\) and \(E_m\) can be expressed in our propositional language:

$$\begin{aligned} E_k{\mathop {\ne }\limits ^{\bullet }}E_m:= \bigvee _{ x \in A} \lnot (x {\upepsilon }E_k \leftrightarrow x {\upepsilon }E_m). \end{aligned}$$

This allows us to encode the representation of an enumeration by the following formula

$$\begin{aligned} \bigwedge _{1 \le k< m\le n} E_k{\mathop {\ne }\limits ^{\bullet }}E_m \quad \text {(subset enumeration).} \end{aligned}$$

Clearly, for any \(\textsf {MAF}\) it holds that

$$\begin{aligned} v_{\textsf {MAF}} \vDash \bigwedge _{1 \le k< m\le n} E_k{\mathop {\ne }\limits ^{\bullet }}E_m \end{aligned}$$
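Definition 6 amounts to a pairwise-distinctness check on the subset blocks encoded in a valuation. A minimal sketch, reusing the illustrative string conventions of the previous sketch:

```python
def represents_enumeration(v, A, n):
    """Definition 6: the n subsets encoded in valuation v are pairwise distinct."""
    blocks = [{a for a in A if f"{a} eps E{k}" in v} for k in range(1, n + 1)]
    return all(blocks[k] != blocks[m]
               for k in range(n) for m in range(k + 1, n))

# The valuation of Example 2 passes the check;
# dropping "b eps E3" would make E_1 equal to E_3 and fail it.
v0 = {"b~>a", "a eps E2", "b eps E3", "a eps E4", "b eps E4", "aw_1(a)", "aw_2(b)"}
print(represents_enumeration(v0, {"a", "b"}, 4))   # True
```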

Most importantly, based on this language and semantics we can provide encodings for the relevant notions introduced in the previous section, as in the following list:

  • \(E_k\sqsubseteq E_l:= \bigwedge _{a \in A}(a {\upepsilon }E_k\rightarrow a {\upepsilon }E_{l})\),

  • \(E_k\sqsubset E_l:= E_k\sqsubseteq E_l \wedge \bigvee _{a \in A}(a {\upepsilon }E_{l}\wedge \lnot a {\upepsilon }E_k)\),

  • \(\textsf {conf\_free}_i(E_k):=\bigwedge _{a \in A}\Bigg (a {\upepsilon }E_k\rightarrow \Big ( \textsf {aw}_i(a) \wedge \lnot \bigvee _{b\in A}(b {\upepsilon }E_{k}\wedge b \leadsto a)\Big ) \Bigg )\),

  • \(\textsf {complete}_i(E_k):=\textsf {conf\_free}_i(E_k)\wedge \bigwedge _{a \in A}\Bigg (a {\upepsilon }E_k\leftrightarrow \bigwedge _{b\in A}\Big (\big ( \textsf {aw}_i(b) \wedge b \leadsto a \big )\rightarrow \bigvee _{c\in A}(c{\upepsilon }E_k \wedge c\leadsto b)\Big )\Bigg )\),

  • \(\textsf {preferred}_i(E_k):=\textsf {complete}_i(E_k) \wedge \lnot \bigvee _{1 \le l \le n}\big (\textsf {complete}_i(E_l) \wedge (E_k \sqsubset E_l) \big )\),

  • \(\textsf {stracc}_i(a):= \bigwedge _{1 \le k \le n}\Big (\textsf {preferred}_i(E_k)\rightarrow a {\upepsilon }E_k\Big )\),

  • \(\textsf {wekacc}_i(a):= \bigvee _{1 \le k \le n}\big (\textsf {preferred}_i(E_k) \wedge a {\upepsilon }E_k\big ) \wedge \bigvee _{1 \le k \le n}\big (\textsf {preferred}_i(E_k) \wedge \lnot a {\upepsilon }E_k\big ) \wedge \lnot \bigvee _{1 \le k \le n}\big (\textsf {preferred}_i(E_k) \wedge \bigvee _{b\in A}(b {\upepsilon }E_{k}\wedge b \leadsto a)\big )\),

  • \(\textsf {strrej}_i(a):=\bigwedge _{1 \le k \le n}\big (\textsf {preferred}_i(E_k) \rightarrow \bigvee _{b \in A}(b {\upepsilon }E_{k} \wedge b \leadsto a)\big )\),

  • \(\textsf {wekrej}_i (a):=\bigvee _{1 \le k \le n}\big (\textsf {preferred}_i(E_k) \wedge \bigvee _{b \in A}(b {\upepsilon }E_{k} \wedge b \leadsto a)\big ) \wedge \bigvee _{1 \le k \le n}\big (\textsf {preferred}_i(E_k) \wedge \bigwedge _{b \in A}(b {\upepsilon }E_{k} \rightarrow \lnot b \leadsto a)\big ) \wedge \bigwedge _{1 \le k \le n}\big (\textsf {preferred}_i(E_k) \rightarrow \lnot a {\upepsilon }E_k\big )\), and

  • \(\textsf {border}_i(a):= \lnot \textsf {stracc}_i(a) \wedge \lnot \textsf {wekacc}_i(a)\wedge \lnot \textsf {strrej}_i(a) \wedge \lnot \textsf {wekrej}_i(a) \).

The shorthand \(E_k\sqsubseteq E_l\) (resp. \(E_k\sqsubset E_l\)) stands for “\(E_k\) is a subset (resp. a proper subset) of \(E_l\)”. \(\textsf {conf\_free}_i(E_k)\) (resp. \(\textsf {complete}_i(E_k)\), \(\textsf {preferred}_i(E_k)\)) means “the set \(E_k\) is conflict-free (resp. complete, preferred) for agent i (i.e. w.r.t. \((A_i,R_i)\))”. \(\textsf {stracc}_i(a)\) encodes “argument a is strongly accepted by agent i” (Definition 4). Analogously, \(\textsf {wekacc}_i(a)\), \(\textsf {strrej}_i(a)\), \(\textsf {wekrej}_i (a)\) and \(\textsf {border}_i(a)\) stand respectively for “argument a is weakly accepted, strongly rejected, weakly rejected, borderline for agent i”.Footnote 15
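To see how these encodings behave as ordinary Boolean formulas, the following sketch (again with our illustrative variable names; only \(\textsf {conf\_free}_i\) is spelled out, since the remaining encodings follow the same pattern of turning quantification over \(A\) and over the enumeration into finite conjunctions and disjunctions) evaluates \(\textsf {conf\_free}_i(E_k)\) at a given valuation:

```python
def holds_conf_free(v, i, k, A):
    """Evaluate conf_free_i(E_k) at a valuation v (a set of true variable strings)."""
    att = lambda a, b: f"{a}~>{b}" in v
    inn = lambda a: f"{a} eps E{k}" in v
    aw  = lambda a: f"aw_{i}({a})" in v
    # Conjunction over a in A of: a in E_k  ->  (aw_i(a) and no b in E_k attacks a)
    return all((not inn(a)) or (aw(a) and not any(inn(b) and att(b, a) for b in A))
               for a in A)

# At the valuation of Example 2, E_2 = {a} is conflict-free for agent 1
# but not for agent 2, who is not aware of a.
v0 = {"b~>a", "a eps E2", "b eps E3", "a eps E4", "b eps E4", "aw_1(a)", "aw_2(b)"}
print(holds_conf_free(v0, 1, 2, {"a", "b"}))   # True
print(holds_conf_free(v0, 2, 2, {"a", "b"}))   # False
```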

The following proposition shows that our encoding is sound, following the satisfiability approach of Besnard et al. (2014), in the sense that \(\textsf {MAF}\) has a given property if and only if its encoding is true at \(v_{\textsf {MAF}}\).

Proposition 1

Let \(\textsf {MAF}=(A,R, \{A_i\}_{i \in \textsf {Ag}},\{E_1,...,E_n\})\) be a MAF, let \({\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}})\) be the propositional language for \(\textsf {MAF}\). The following holds, where \(1\le k,\!l \le n\), \(i \in \textsf {Ag}\), and \(a \in A\):

  1. \(v_{\textsf {MAF}}\vDash E_k\sqsubseteq E_l\) (resp. \(v_{\textsf {MAF}}\vDash E_k\sqsubset E_l)\) iff \(E_k\subseteq E_l\) (resp. \(E_k \subset E_l)\).Footnote 16

  2. \(v_{\textsf {MAF}}\vDash \textsf {conf\_free}_i(E_k)\) iff \(E_k\) is conflict free w.r.t. \((A_i,R_i)\) (that is, iff \(E_k\subseteq A_i\) and \(E_k\) is conflict-free).

  3. \(v_{\textsf {MAF}} \vDash \textsf {complete}_i(E_k)\) iff \(E_k\) is complete w.r.t. \((A_i,R_i)\).

  4. \(v_{\textsf {MAF}} \vDash \textsf {preferred}_i(E_k)\) iff \(E_k\) is preferred w.r.t. \((A_i,R_i)\).

  5. \(v_{\textsf {MAF}} \vDash \textsf {stracc}_i(a)\) (resp. \(\textsf {wekacc}_i(a)\), \(\textsf {wekrej}_i(a)\), \(\textsf {strrej}_i(a)\), \(\textsf {border}_i(a)\)) iff a is strongly accepted (resp. weakly accepted, weakly rejected, strongly rejected, borderline) by i.

Proof

See “Appendix A1”. \(\square \)

As mentioned, this is a fundamental step to talk about goals of communication, when these involve the justification status of a specific argument (the issue of the debate) that the speaker wants to induce in the hearer.

5 Epistemic logics for abstract argumentation

As our initial example shows, agents need to form beliefs about the awareness set of other agents, and these beliefs may be more or less accurate. Agents may also have different capacities to detect whether an argument attacks another. To reason about agents’ uncertainty we need to expand our language with epistemic modalities \(\square _i\), which stand for “agent i believes that” or sometimes “agent i knows that”. For reasons explained below, we do not need to choose between the two readings at this stage.

Definition 7

Formulas of the language \({\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}},\square )\) are given by the following grammar:

$$\begin{aligned} \varphi {:}{:}{=} p\mid \lnot \varphi \mid (\varphi \wedge \varphi ) \mid \square _i \varphi \qquad p \in {\mathcal {V}}^{A}_{\textsf {Ag}}, i \in \textsf {Ag}\end{aligned}$$

Other Boolean connectives (\(\vee \), \(\rightarrow \), \(\leftrightarrow \)) and constants (\(\top \), \(\perp \)) are defined as usual and \(\lozenge _i\) is defined as \(\lnot \square _i \lnot \) with the informal meaning “agent i considers it epistemically possible that...”. In some axiomatisations, we will make use of the mutual belief (knowledge) modality, defined as \(\square _{\textsf {Ag}}\varphi :=\bigwedge _{i\in \textsf {Ag}}\square _i\varphi \), which reads “everyone in \(\textsf {Ag}\) believes (knows) that \(\varphi \)”.

Standard Kripke-style semantics, where states are MAFs over a given set \(A\), provides a natural interpretation of this language and allows us to model uncertainty about other agents’ information and about the presence of attacks. Uncertainty is captured by the accessibility of different states. Intuitively, each state is an alternative to the actual MAF, based on the same pool of arguments \(A\) and the same enumeration of its subsets, but with possibly different objective attacks, and where agents may be aware of different arguments. We name them epistemic argumentative models and define them as follows.

Definition 8

(Model) An epistemic argumentative model (\({{\mathcal {E}}}{{\mathcal {A}}}\)-model) for \({\mathcal {V}}^{A}_{\textsf {Ag}}\) is a tuple \(M=(W,{\mathcal {R}},V)\). Here, \(W\ne \emptyset \) is a set of states, \({\mathcal {R}}{:}\,\textsf {Ag}\rightarrow \wp (W\times W)\) is a function assigning an epistemic accessibility relation \({\mathcal {R}}_i\) to each agent \(i\in \textsf {Ag}\), and \(V{:}\,{\mathcal {V}}^{A}_{\textsf {Ag}} \rightarrow \wp (W)\) is a valuation function. We denote by \({\hat{V}}:W\rightarrow \wp ({\mathcal {V}}^{A}_{\textsf {Ag}})\) the dual valuation function of V, which is defined as \({\hat{V}}(u):=\{p \in {\mathcal {V}}^{A}_{\textsf {Ag}}\mid u \in V(p)\}\). We also denote by \(A_i(w):=\{a\in A\mid w \in V(\textsf {aw}_i(a))\}\) the awareness set of agent i at world w.Footnote 17 Similarly we define the set of attacks that hold at w as \(R(w):=\{(a,b)\in A\times A \mid w \in V(a\leadsto b)\}\). The valuation V should satisfy the following additional constraints:

\(\textsf {ER}\):

for some \(w \in W\), \({\hat{V}}(w)\) represents an enumeration of \(\wp (A)\) (see Definition 6)    (enumeration representation);

\(\textsf {SU}\):

for every \(a{\upepsilon }E_{k} \in {\mathcal {B}}\): \(V(a {\upepsilon }E_k)=W\) or \(V(a {\upepsilon }E_k)=\emptyset \)    (subset uniformity).

The class of all \({{\mathcal {E}}}{{\mathcal {A}}}\)-models is denoted by \({{\mathcal {E}}}{{\mathcal {A}}}\). When no confusion is possible we simply refer to them as models.

Condition ER guarantees that some state w in the model has an unequivocally associated \(\textsf {MAF}_w:=(A,R(w),A_1(w), \dots ,A_n(w),\{E_1,\ldots ,E_n\})\) s.t. \({\hat{V}}(w)=v_{\textsf {MAF}_w}\). Condition SU guarantees that the enumeration of subsets is constant over the whole model. Taken together, ER and SU guarantee that every state \(u\in W\) is unequivocally associated with \(\textsf {MAF}_u=(A,R(u),A_1(u), \dots , A_n(u),\{E_1,\ldots ,E_n\})\), where the only elements that vary with respect to \(\textsf {MAF}_w\) are R(u) and \(A_i(u)\) for \(i\in \{1,\dots , n\}\).

Given \(M=(W,{\mathcal {R}},V)\) we sometimes denote W by M[W]. A pointed model is a pair \((M,w)\) where \(w \in M[W]\) is a specific world representing the actual state of affairs. A pointed model for a given \(\textsf {MAF}\) is just a pointed model \(((W,{\mathcal {R}},V),w)\) such that \({\hat{V}}(w)=v_{\textsf {MAF}}\). As for the interpretation of formulas, truth in pointed models is defined recursively as usual:

Definition 9

(Truth) Given an \({{\mathcal {E}}}{{\mathcal {A}}}\)-model \(M=(W,{\mathcal {R}},V)\) and a state \(w \in W\), define the relation \(\vDash \) as the smallest one satisfying the following clauses:

$$\begin{aligned}&M,w\vDash p\quad&\text {iff}\quad&w \in V(p), \text { for } p\in {\mathcal {V}}^{A}_{\textsf {Ag}};\\&M,w\vDash \lnot \varphi \quad&\text {iff}\quad&M,w\nvDash \varphi ;\\&M,w\vDash \varphi \wedge \psi \quad&\text {iff}\quad&M,w\vDash \varphi \text { and } M,w\vDash \psi ;\\&M,w\vDash \square _i\varphi \quad&\text {iff}\quad&\text {for all } u\in W{:}\, w{\mathcal {R}}_i u \text { implies } M,u\vDash \varphi . \end{aligned}$$

Note that, given a pointed model \((M,w)\) for \(\textsf {MAF}\), it holds that \(M,w\vDash {\textsf {Th}_{\textsf {MAF}}}\). Let \({\mathcal {C}}\) be a class of models. A formula \(\varphi \) is said to be valid in \({\mathcal {C}}\), denoted as \(\vDash _{\mathcal {C}} \varphi \), iff \(\forall M \in {\mathcal {C}}, \forall w \in M[W]{:}\, M,w\vDash \varphi \). A formula \(\varphi \) is said to be a \({\mathcal {C}}\)-consequence of a set \(\varGamma \), denoted as \(\varGamma \vDash _{{\mathcal {C}}}\varphi \), iff \(\forall M \in {\mathcal {C}}, \forall w \in M[W]{:}\, M,w\vDash \varGamma \quad \text {implies} \quad M,w\vDash \varphi \).

Remark 1

(Unawareness of attacks) Note that, according to Definition 8, it is possible to build a model M with a world \(w\in M[W]\) at which: 1. agent i is not aware of a (i.e. \(w \notin V(\textsf {aw}_i(a))\)) and 2. she considers possible a state u (i.e. \(w{\mathcal {R}}_i u\)) at which \(a\leadsto b\) holds (i.e. \(u\in V(a\leadsto b)\)). Although this could seem a defect of Definition 8, it is not. The key is that, in the intended interpretation of \({{\mathcal {E}}}{{\mathcal {A}}}\)-models, once sound and (locally) complete awareness of attacks (\(\textsf {SCAA}\)) is assumed, i is simply not aware of the attack \(a\leadsto b\) (although this attack holds at u as a matter of fact). More formally, recall that, since we are assuming \(\textsf {SCAA}\), we use \(R_i\) defined as \(R\cap (A_i \times A_i)\) to denote the attacks that agent i is aware of in a multi-agent AF. Note that \(R_i\) can be easily captured in our object language as \(a\leadsto _i b:=a \leadsto b \wedge \textsf {aw}_i(a)\wedge \textsf {aw}_i(b)\), and then we have \(M,u\nvDash a \leadsto _i b \). However, we do not need to make this distinction explicit in our object language, since it is already captured in the syntactic definitions of solution concepts/justification status for a given agent (see \(\textsf {complete}_i\), \(\textsf {preferred}_i\), \(\textsf {stracc}_i\), etc. in Sect. 4).

General \({{\mathcal {E}}}{{\mathcal {A}}}\)-models tell us very little about the constraints on agents’ awareness of arguments and attacks. Even if \(\textsf {SCAA}\) holds at every point, agents may still be uncertain about attacks if they are not able to distinguish between two points with radically different underlying universal frameworks, as in the following example:

Example 3

Figure 2 depicts a pointed \({{\mathcal {E}}}{{\mathcal {A}}}\)-model \((M_0, w_0)\) for the single-agent argumentation framework \((A, R, A_1, \{E_1,\dots ,E_8\})\), where \(A=\{b,c,d\}\), \(R=\{(c,b)\}\), \(A_1=\{b,c,d\}\), \(E_1 = \emptyset \), \(E_2 =\{b\}\), \(E_3 =\{c\}\), \(E_4 =\{d\}\), \(E_5 =\{b,c\}\), \(E_6 =\{c,d\}\), \(E_7 =\{b,d\}\) and \(E_8 =\{b,c,d\}\). Here, the valuation is as indicated by Fig. 2 and by the given enumeration, i.e. \({\hat{V}}(w_0)= \{c \leadsto b\}\cup \{\textsf {aw}_1(b), \textsf {aw}_1(c), \textsf {aw}_1(d) \} \cup \{x {\upepsilon }E_k \mid x \in E_k, 1\le k \le 8\}\) and \({\hat{V}}(w_1)= \{d \leadsto b\}\cup \{\textsf {aw}_1(b), \textsf {aw}_1(c), \textsf {aw}_1(d) \} \cup \{x {\upepsilon }E_k \mid x \in E_k, 1\le k \le 8\}\). Note that the valuation of attack variables is not uniform. The reader can check the satisfiability of some interesting facts, such as \(M_0,w_0\vDash \lnot \square _1(c \leadsto b) \wedge \lnot \square _1 (d\leadsto b) \wedge \square _1 \textsf {strrej}_{1}(b)\). Informally, agent 1 is not sure about which argument attacks b but he knows that its justification status is strong rejection. To see that the last part of the conjunction is true, note that both \(\textsf {MAF}_{w_0}\) and \(\textsf {MAF}_{w_1}\) have the same unique preferred solution, i.e. \(\{c,d\}\), and that, in both cases, it attacks b; hence \(M_0\vDash \textsf {strrej}_{1}(b) \) holds by applying Definition 4 and Proposition 1.

Fig. 2

A single-agent pointed \({{\mathcal {E}}}{{\mathcal {A}}}\)-model \((M_0, w_0)\) for the single-agent argumentation framework \((A, R, A_1, \{E_1,\dots ,E_8\})\), capturing uncertainty about the attack relation. The actual world \(w_0\) is within a double-line frame
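The truth definition is straightforward to implement for finite models. The following model-checking sketch (our own encoding of models and formulas, unrelated to any existing tool) verifies the first two conjuncts of the formula discussed in Example 3, assuming, as depicted in Fig. 2, that agent 1 cannot distinguish \(w_0\) from \(w_1\):

```python
def truth(M, w, f):
    """Definition 9 by recursion on the formula, for finite models.
    Formulas are nested tuples: ("var", p), ("not", f), ("and", f, g), ("box", i, f)."""
    kind = f[0]
    if kind == "var":
        return w in M["V"].get(f[1], set())
    if kind == "not":
        return not truth(M, w, f[1])
    if kind == "and":
        return truth(M, w, f[1]) and truth(M, w, f[2])
    if kind == "box":
        return all(truth(M, u, f[2]) for (v, u) in M["R"][f[1]] if v == w)
    raise ValueError(f"unknown connective: {kind}")

# The attack variables of Example 3, with agent 1 unable to distinguish w0 from w1
# (subset and awareness variables are omitted since they play no role in these formulas).
M0 = {"V": {"c~>b": {"w0"}, "d~>b": {"w1"}},
      "R": {1: {("w0", "w0"), ("w0", "w1"), ("w1", "w0"), ("w1", "w1")}}}
not_sure_c = ("not", ("box", 1, ("var", "c~>b")))
not_sure_d = ("not", ("box", 1, ("var", "d~>b")))
print(truth(M0, "w0", ("and", not_sure_c, not_sure_d)))   # True: agent 1 is unsure which argument attacks b
```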

\({{\mathcal {E}}}{{\mathcal {A}}}\)-models can then be seen as minimal semantic devices for joint reasoning about argumentation and epistemic attitudes. We qualify them as minimal because they capture no assumption about the reasoning/awareness introspection capabilities of the formalised agents. We shall mostly focus on particular subclasses of \({{\mathcal {E}}}{{\mathcal {A}}}\) which incrementally combine additional constraints.

Definition 10

(Properties of models) Let \(M \in {{\mathcal {E}}}{{\mathcal {A}}}\), \(i,j \in \textsf {Ag}\), \(w,u \in M[W]\), \(a,b \in A\), and \(a \leadsto b \in {{\mathcal {A}}}{{\mathcal {T}}}\). We say that M satisfies:

\(\textsf {AU}\):

(attack uniformity) iff \(V(a\leadsto b)= W\) or \(V(a \leadsto b)=\emptyset \);

\(\textsf {PIAw}\):

(positive introspection of awareness) iff \(w {\mathcal {R}}_i u\) implies \(A_i(w)\subseteq A_i(u)\);

\(\textsf {NIAw}\):

(negative introspection of awareness) iff \(w {\mathcal {R}}_i u\) implies \(A_{i}(u)\subseteq A_i(w)\); and

\(\textsf {GNIAw}\):

(generalized negative introspection of awareness) iff \(w {\mathcal {R}}_i u\) implies \(A_{j}(u)\subseteq A_i(w)\).

Condition AU amounts to assuming that attacks are the same throughout all the states and therefore \(\textsf {SCAA}\) is common knowledge (belief). PIAw and NIAw are adapted versions of the introspective properties for general awareness (Fagin and Halpern 1987). Condition PIAw dictates that if one is aware of a specific argument, then he cannot consider it possible that he is not. Conversely, NIAw amounts to saying that if one is not aware of a specific argument then he cannot think it possible that he is. They are respectively captured by axioms \(\textsf {aw}_i(a)\rightarrow \square _i \textsf {aw}_i(a)\) and \(\lnot \textsf {aw}_i(a) \rightarrow \square _i \lnot \textsf {aw}_i(a)\). GNIAw is a stronger constraint, saying that if one is not aware of a specific argument then he cannot think it possible that other agents are, and therefore NIAw is just a special case of GNIAw.Footnote 18 GNIAw is captured by the axiom \(\lnot \textsf {aw}_i (a)\rightarrow \square _i \lnot \textsf {aw}_j(a)\) or, maybe more intuitively, by its contrapositive \(\lozenge _i \textsf {aw}_j(a)\rightarrow \textsf {aw}_i(a)\).

We denote by \({\mathcal {A}}o{\mathcal {A}}\) (awareness of arguments) the class of all \({{\mathcal {E}}}{{\mathcal {A}}}\)-models satisfying AU, PIAw and GNIAw and refer to its elements as \({\mathcal {A}}o{\mathcal {A}}\)-models. Clearly, the one in Fig. 2 is not an \({\mathcal {A}}o{\mathcal {A}}\)-model. However, the class \({\mathcal {A}}o{\mathcal {A}}\) is general enough to subsume scenarios like our Example 1.

Example 4

Figure 3 represents a pointed \({\mathcal {A}}o{\mathcal {A}}\)-model (\(M_1, w_0\)) capturing the relevant epistemic features of Example 1. We assume that \((M_1,w_0)\) is an \({\mathcal {A}}o{\mathcal {A}}\)-model for the MAF of Fig. 1, that is \({\hat{V}}(w_0)=v_{\textsf {MAF}}\). Again, we assume some enumeration E of the set \(\wp (A)\) to be given and that the valuation of \(M_1\) represents that enumeration. Condition AU in the definition of \({\mathcal {A}}o{\mathcal {A}}\)-models allows dispensing with the graphical representation of the valuation of attack variables (as long as we keep in mind what the underlying universal framework is), since attack variables are uniform throughout the model. In the case of model \(M_1\), depicted in Fig. 3, we assume that V matches the structure of the UAF of Fig. 1, i.e. \(R(w)=\{(b,a),(d,b),(e,d),(c,b),(f,c)\}\) for every \(w\in M_1[W]\). Moreover, following Schwarzentruber et al. (2012), we also represent the valuation of awareness variables in a compact way; e.g. \(1{:}\,\{a,b,c\}\) inside the \(w_3\)-rectangle means that \(A_1(w_3)=\{a,b,c\}\) or, equivalently, that \(w_3\in V(\textsf {aw}_1(a)), w_3\in V(\textsf {aw}_1(b)), w_3\in V(\textsf {aw}_1(c))\) and \(\forall x \in A{\setminus } \{a,b,c\}{:}\,w_3 \notin V(\textsf {aw}_1(x))\). The reader can check that \(M_1,w_0\vDash \square _1 \textsf {strrej}_1(a) \wedge \square _2 \textsf {strrej}_2 (a)\), i.e. both agents agree about the justification status of a. However, this agreement is based on different reasons: Agent 1’s strong rejection is based on full awareness of the universal framework and is therefore not defeasible, while Agent 2’s rejection is based on partial awareness and is defeasible by new information.

Fig. 3

A pointed \({\mathcal {A}}o{\mathcal {A}}\)-model (\(M_1\), \(w_0\)) representing the agents’ uncertainty in Example 1. The actual world \(w_0\) is within a double-line frame

We have not discussed specific properties of \({\mathcal {R}}_i\) so far, since we want to provide a comprehensive approach, taking into account both knowledge and belief. Moreover, there is no universal agreement about the properties of either notion.Footnote 19 We do not intend to take a stand in this debate and we are content to show that the different constraints on \({\mathcal {R}}_i\) do not pose any technical problem for completeness. Accordingly, given a class of models \({\mathcal {C}}\), we denote by \({\mathcal {S}}4({\mathcal {C}})\) (resp. \({\mathcal {S}}5({\mathcal {C}})\), \({{\mathcal {K}}}{{\mathcal {D}}}45({\mathcal {C}})\)) the subclass of \({\mathcal {C}}\) where every \({\mathcal {R}}_i\) is a preorder (resp. an equivalence relation; a serial, transitive and euclidean relation).

We now provide sound and strongly complete axiomatisations for the relevant classes of models. Let us first define the corresponding proof systems:

Definition 11

(Proof systems)

  • \(\textsf {EA}\) is the proof system containing all instances of Taut, K, PIS (positive introspection of subsets), NIS (negative introspection of subsets), ER (enumeration representation) and both inference rules from Table 1.Footnote 20 \(\textsf {S4}(\textsf {EA})\) (resp. \(\textsf {S5}(\textsf {EA})\), \(\textsf {KD45}(\textsf {EA})\)) extends \(\textsf {EA}\) with axioms T and 4 (resp. T, 4 and 5; D, 4 and 5) from Table 1.

  • \(\textsf {AoA}\) (Awareness of Arguments) is the system extending \(\textsf {EA}\) with PIAt (positive introspection of attacks), NIAt (negative introspection of attacks), PIAw (positive introspection of awareness) and GNIAw (generalized negative introspection of awareness).Footnote 21 \(\textsf {S4}(\textsf {AoA})\) (resp. \(\textsf {S5}(\textsf {AoA})\), \(\textsf {KD45}(\textsf {AoA})\)) extends \(\textsf {AoA}\) with axioms T and 4 (resp. T, 4 and 5; D, 4 and 5) from Table 1.

Let \({\textsf {L}}\) be any of the proof systems defined above; we denote by \({\mathcal {C}}^{{\textsf {L}}}\) the corresponding class of models according to Table 2. For instance \({\mathcal {C}}^{\textsf {S4}(\textsf {AoA})}={\mathcal {S}}4({\mathcal {A}}o{\mathcal {A}})\).

Table 1 Axioms for \({{\mathcal {E}}}{{\mathcal {A}}}\)-models and \({\mathcal {A}}o{\mathcal {A}}\)-models capturing different constraints on \({\mathcal {R}}_i\)
Table 2 Proof systems (first column), their corresponding class of models (second column), and the axioms they contain (third column)

Theorem 1

Let \({\textsf {L}}\) be any of the proof systems defined above, then \({\textsf {L}}\) is sound and strongly complete w.r.t. \({\mathcal {C}}^{{\textsf {L}}}\).

Proof

See “Appendix A2”. \(\square \)

Although the details of the proof are left for the “Appendix”, some remarks are in order. Soundness results are straightforward by induction on the length of derivations, given that all axioms are valid and that rules preserve validity (in their corresponding class of models). Strong completeness will be proved using the canonical model technique. Note however that the canonical model for \(\textsf {EA}\) is not an \({{\mathcal {E}}}{{\mathcal {A}}}\)-model—hence this problem is inherited by every system extending \(\textsf {EA}\). More concretely, SU is violated by the canonical model for \(\textsf {EA}\). This inconvenience is circumvented by taking its generated submodels, which, thanks to the constraints encoded by PIS and NIS, turn out to be \({{\mathcal {E}}}{{\mathcal {A}}}\)-models (see Theorem 7.3. of Blackburn et al. 2002 for a similar proof). Furthermore, truth is preserved under generated submodels for our language and semantics, just as in the general modal case (Blackburn et al. 2002, Prop. 2.6.), even if we are not working with normal modal logics in the sense of Blackburn et al. (2002), because the rule of uniform substitution is not sound here.

6 Epistemic and argumentative dynamics

Standard approaches to the dynamics of AFs focus almost exclusively on changes generated by addition and deletion of arguments and/or attacks, leaving epistemic updates aside (Doutre and Mailly 2018). Here, we present a framework where both dynamics (epistemic and argumentative) are encompassed. Moreover, this framework allows reasoning about different communication moves and complex information updates. For presentational purposes, we focus on completeness results for dynamic extensions of \(\textsf {EA}\), \(\textsf {AoA}\), \(\textsf {S4}(\textsf {AoA})\), \(\textsf {KD45}(\textsf {AoA})\) and, semantically, on transformations of the corresponding classes of models. Completeness proofs and conceptual considerations concerning the dynamic extensions of other systems can be easily extrapolated and are therefore not discussed. The main technical idea of our dynamic approach is to use event models (Baltag et al. 2016; Baltag and Moss 2004) enriched with propositional assignments or substitutions (van Benthem et al. 2006; van Ditmarsch and Kooi 2008) to capture both kinds of dynamics.Footnote 22 A key notion for defining these models is that of a propositional substitution.

Definition 12

(Substitutions) A propositional \(\textsf {EA}\)-substitution (or an \(\textsf {EA}\)-substitution, for short) is a function \(\sigma : {\mathcal {V}}^{A}_{\textsf {Ag}}\rightarrow {\mathcal {V}}^{A}_{\textsf {Ag}}\cup \{\perp ,\top \}\) s.t.:

  (i) for every \(p \in {\mathcal {B}}\) it holds that \(\sigma (p)= p\) (i.e. subset variables are not substituted); and

  (ii) for every \(p \in {{\mathcal {A}}}{{\mathcal {T}}}\cup {\mathcal {O}}\) either \(\sigma (p)=p\) or \(\sigma (p)=\top \) or \(\sigma (p)=\perp \).Footnote 23

We use \(\textsf {SUB}^{\textsf {EA}}\) to denote the set of all \(\textsf {EA}\)-substitutions, and \(\lambda \) to denote the identity substitution. Moreover, an \(\textsf {AoA}\)-substitution is an \(\textsf {EA}\)-substitution s.t.:

  (iii) for every \(p \in {{\mathcal {A}}}{{\mathcal {T}}}\) it holds that \(\sigma (p)=p\) (persistence of attacks).

We use \(\textsf {SUB}^{\textsf {AoA}}\) to denote the set of all \(\textsf {AoA}\)-substitutions.

Intuitively, condition (i) ensures that the enumeration is kept fixed under update.Footnote 24 In the general case of \(\textsf {EA}\)-substitutions, condition (ii) allows modifying both awareness and attack variables. The modification of awareness variables corresponds to addition or deletion of arguments from the agents’ awareness set. Modification of attack variables is of interest in order to contextualize other formalisms we deal with in Sect. 8. Since modification of attacks is not relevant for our main focus, we will mostly deal with \(\textsf {AoA}\)-substitutions, where this is forbidden by condition (iii). We can also represent \(\textsf {EA}\)-substitutions (resp. \(\textsf {AoA}\)-substitutions) as maps of the form \(\{p_1 \mapsto *_1,\ldots ,p_n\mapsto *_n\}\) where for every \(1\le k \le n\) we have that: \(p_k\in {{\mathcal {A}}}{{\mathcal {T}}}\cup {\mathcal {O}}\) (resp. \(p_k\in {\mathcal {O}}\)); \(*_k\in \{\top ,\perp \}\); and for every \(1\le m \le n\), \(k\ne m\) implies \(p_k\ne p_m\). With this notion at hand, we define event models as follows:

Definition 13

(Event model) An \({{\mathcal {E}}}{{\mathcal {A}}}\)-event model (or an event model, for short) for a given language \({\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}},\square )\) is a tuple \(E=(S,{\mathcal {T}},\textsf {pre},\textsf {pos})\) where \(S\ne \emptyset \) is a finite set of events; \({\mathcal {T}}{:}\,\textsf {Ag}\rightarrow \wp (S \times S)\) assigns to each agent i an indistinguishability relation \({\mathcal {T}}_i\) between events (intended to represent uncertainty of agent i about which changes are happening); \(\textsf {pre}{:}\, S \rightarrow {\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}},\square )\) is a function assigning a precondition to each event and \(\textsf {pos}{:}\,S \rightarrow \textsf {SUB}^{\textsf {EA}}\) assigns a substitution to each event, indicating its effect on awareness and attacks.

Given an event model \(E=(S,{\mathcal {T}},\textsf {pre},\textsf {pos})\) we sometimes use E[S] to denote S. A pointed event model is a pair (Es) where \(s\in E[S]\). We denote by \(\textsf {ea}\) the class of all \({{\mathcal {E}}}{{\mathcal {A}}}\)-event models. The next definition explains how \({{\mathcal {E}}}{{\mathcal {A}}}\)-models and event models interact through action execution.

Definition 14

(Product update) Given an \({{\mathcal {E}}}{{\mathcal {A}}}\)-model \(M=(W,{\mathcal {R}},V)\) and an event model \(E=(S,{\mathcal {T}},\textsf {pre},\textsf {pos})\), their product update model is defined as \(M\otimes E:=(W',{\mathcal {R}}',V')\) where:

  • \(W':=\{(w,s)\mid M,w \vDash \textsf {pre}(s)\}\);

  • \((w,s){\mathcal {R}}'_i(w',s')\) iff \(w {\mathcal {R}}_i w'\) and \(s {\mathcal {T}}_i s'\); and

  • \(V'(p):= \{(w,s)\in W' \mid M,w\vDash \textsf {pos}(s)(p)\}\).

Informally, product update is meant to provide a new \({{\mathcal {E}}}{{\mathcal {A}}}\)-model where the possible states are pairs \((w,s)\), accessibility holds between pairs iff it holds coordinatewise, and the valuation of variables is updated according to the substitution assigned to s as its postcondition.

Remark 2

Note that if M is an \({{\mathcal {E}}}{{\mathcal {A}}}\)-model, \(M\otimes E\) is not guaranteed to be an \({{\mathcal {E}}}{{\mathcal {A}}}\)-model, since \(W'\) might be empty. To see this, take for instance \(\textsf {pre}(s)=\perp \) for every \(s\in E[S]\). When \(W'\ne \emptyset \), we say that \(M\otimes E\) is defined.
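The following sketch implements Definition 14 under an illustrative dictionary encoding of models and event models of our own devising, with substitutions represented as finite maps to the strings "top"/"bot" (identity everywhere else) and with None returned when the update is undefined in the sense of Remark 2. The toy update mirrors the public addition actions of Example 5 below.

```python
from itertools import product

def truth(M, w, f):
    """Truth clauses of Definition 9 (plus a constant 'top' for trivial preconditions)."""
    kind = f[0]
    if kind == "top":  return True
    if kind == "var":  return w in M["V"].get(f[1], set())
    if kind == "not":  return not truth(M, w, f[1])
    if kind == "and":  return truth(M, w, f[1]) and truth(M, w, f[2])
    if kind == "box":  return all(truth(M, u, f[2]) for (v, u) in M["R"][f[1]] if v == w)

def product_update(M, E):
    """Definition 14: the product update of M with E, or None if no pair survives the preconditions."""
    W2 = {(w, s) for w, s in product(M["W"], E["S"]) if truth(M, w, E["pre"][s])}
    R2 = {i: {((w, s), (w2, s2)) for (w, s) in W2 for (w2, s2) in W2
              if (w, w2) in M["R"][i] and (s, s2) in E["T"][i]} for i in M["R"]}
    variables = set(M["V"]) | {p for s in E["S"] for p in E["pos"][s]}
    V2 = {p: {(w, s) for (w, s) in W2
              if E["pos"][s].get(p, p) == "top"
              or (E["pos"][s].get(p, p) == p and w in M["V"].get(p, set()))}
          for p in variables}
    return {"W": W2, "R": R2, "V": V2} if W2 else None

# A two-world model where agent 2 is unsure whether agent 1 is aware of d,
# and a single-event public action adding d to both awareness sets (cf. Example 5).
M = {"W": {"w0", "w1"},
     "R": {1: {("w0", "w0"), ("w1", "w1")},
           2: {("w0", "w0"), ("w0", "w1"), ("w1", "w0"), ("w1", "w1")}},
     "V": {"aw_1(d)": {"w0"}}}
Pub_d = {"S": {"t"},
         "T": {1: {("t", "t")}, 2: {("t", "t")}},
         "pre": {"t": ("top",)},
         "pos": {"t": {"aw_1(d)": "top", "aw_2(d)": "top"}}}
M2 = product_update(M, Pub_d)
goal = ("box", 2, ("and", ("var", "aw_1(d)"), ("var", "aw_2(d)")))
print(truth(M2, ("w0", "t"), goal))   # True: after the update, agent 2 believes both agents are aware of d
```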

We use the symbols ‘\(\bullet ,\circ ,\bigtriangleup \)’ to name events. Let us now look at two examples of event models.

Example 5

(Public addition of an argument) Let us first consider the event model for the public addition of an argument a, defined as \(\textsf {Pub}^{a}:=(S,{\mathcal {T}},\textsf {pre},\textsf {pos})\) where:

  • \(S=\{\bigtriangleup \}\),

  • \({\mathcal {T}}_k=\{(\bigtriangleup ,\bigtriangleup )\}\) for every \(k\in \textsf {Ag}\),

  • \(\textsf {pre}(\bigtriangleup )=\top \), and

  • \(\textsf {pos}(\bigtriangleup )=\{\textsf {aw}_k(a)\mapsto \top \mid k \in \textsf {Ag}\}\).

\(\textsf {Pub}^{a}\) is graphically represented in the left-hand side of Fig. 4, for the special case where \(\textsf {Ag}=\{1,2\}\).

Example 6

(Private addition of an argument) We define the event model for i privately adding an argument a as \(\textsf {Pri}^{a}_{i}:=(S,{\mathcal {T}},\textsf {pre},\textsf {pos})\) where:

  • \(S=\{\bullet ,\circ \}\), where \(\bullet \) is the action of i learning a while \(\circ \) is the “nothing happens” event;

  • if \(k=i\), then \({\mathcal {T}}_k=\{(\bullet ,\bullet ),(\circ ,\circ )\}\); else \({\mathcal {T}}_k=\{(\bullet ,\circ ),(\circ ,\circ )\}\);

  • \(\textsf {pre}(\bullet )=\textsf {pre}(\circ )=\top \); and

  • \(\textsf {pos}(\bullet )=\{\textsf {aw}_i(a)\mapsto \top \}\) and \(\textsf {pos}(\circ )=\lambda \).

In this case, the definition of \({\mathcal {T}}\) captures the intuition of a completely private learning action for i: after the execution of \((\textsf {Pri}^{a}_i,\bullet )\), everyone (except i) believes that nothing has happened. \(\textsf {Pri}_1^{a}\) is pictorially represented in the right-hand side of Fig. 4 for the special case \(\textsf {Ag}=\{1,2\}\).

Both event models represent the same (well-studied) action of adding an argument to an argumentation framework (Cayrol et al. 2010), but DEL modelling allows us to account for the distinction between public and private communication, thus adding a relevant epistemic dimension.Footnote 25 As an example of product update in action, Fig. 5 illustrates the operation \(M_1 \otimes \textsf {Pub}^{d}\), which we discuss in Example 7.

Fig. 4 Representation of \(\textsf {Pub}^{a}\) (left) and \(\textsf {Pri}^{a}_1\) (right) for two agents

More generally, given a set of arguments B, the public addition of the whole set is captured by the action \(\textsf {Pub}^{B}\), which only modifies the definition of \(\textsf {Pub}^{a}\) in that \(\textsf {pos}(\bigtriangleup ):=\{\textsf {aw}_j(b) \mapsto \top \mid b\in B,j \in \textsf {Ag}\}\). Analogously, private addition of B by i is \(\textsf {Pri}^{B}_i\) and works as in Fig. 4 (right) with \(\textsf {pos}(\bullet ):=\{\textsf {aw}_i(b)\mapsto \top \mid b\in B\}\).
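For concreteness, the event models of Examples 5 and 6 (and their generalizations to a set B) can be generated in the same dictionary encoding used in the product-update sketch above. The constructor names pub and pri and the string encoding "aw_i(b)" of awareness variables are our own illustrative conventions.

```python
def pub(agents, B):
    """Event model Pub^B: a single event, reflexive for everybody,
    trivial precondition, and every agent becomes aware of every b in B."""
    s = "triangle"
    return {"events": {s},
            "rel": {i: {(s, s)} for i in agents},
            "pre": {s: lambda m, w: True},
            "pos": {s: {f"aw_{i}({b})": True for i in agents for b in B}}}

def pri(agents, i, B):
    """Event model Pri^B_i: agent i privately learns B, while every other
    agent takes the 'nothing happens' event (empty substitution) to occur."""
    learn, skip = "bullet", "circle"
    return {"events": {learn, skip},
            "rel": {j: ({(learn, learn), (skip, skip)} if j == i
                        else {(learn, skip), (skip, skip)})
                    for j in agents},
            "pre": {learn: lambda m, w: True, skip: lambda m, w: True},
            "pos": {learn: {f"aw_{i}({b})": True for b in B}, skip: {}}}
```

For instance, in this encoding product_update(m, pub(["1", "2"], {"d"})) computes \(M\otimes \textsf {Pub}^{d}\) for a two-agent model m.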

The effects of updating \({{\mathcal {E}}}{{\mathcal {A}}}\)-models with actions are described by the following dynamic languages:

Definition 15

(Dynamic languages) Let \({\mathcal {V}}^{A}_{\textsf {Ag}}\) be a set of propositional variables, and let \(\star \subseteq \textsf {ea}\) be a class of event models for \({\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}},\square )\). The formulas of the language \({\mathcal {L}}^{\star }({\mathcal {V}}^{A}_{\textsf {Ag}},\square )\) (or simply \({\mathcal {L}}^{\star }\) when the context is clear) are given by the following grammar:

$$\begin{aligned} \varphi {:}{:}{=} p\mid \lnot \varphi \mid (\varphi \wedge \varphi ) \mid \square _j \varphi \mid [E,s]\varphi \qquad p \in {\mathcal {V}}^{A}_{\textsf {Ag}}, j \in \textsf {Ag}, E \in \star , s \in E[S] \end{aligned}$$

where \([E,s]\varphi \) reads: “after executing \((E,s)\), \(\varphi \) holds”.Footnote 26 We extend the truth relation to the new kinds of formulas as follows:

\(M,w\vDash [E,s]\varphi \qquad \text {iff} \qquad M,w\vDash \textsf {pre}(s) \quad \text {implies} \quad M\otimes E, (w,s) \vDash \varphi \)
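The truth clause just given can be implemented directly on top of the product_update sketch above. The following is an illustrative encoding, with formulas as nested tuples and ("dyn", E, s, f) standing for \([E,s]\varphi \); the function name holds and the formula encoding are our own assumptions.

```python
def holds(m, w, phi):
    """Check M, w |= phi for the dynamic language (illustrative encoding).

    Formulas are nested tuples:
      ("var", p), ("not", f), ("and", f, g), ("box", i, f), ("dyn", e, s, f).
    Reuses product_update from the sketch above.
    """
    op = phi[0]
    if op == "var":
        return phi[1] in m["val"][w]
    if op == "not":
        return not holds(m, w, phi[1])
    if op == "and":
        return holds(m, w, phi[1]) and holds(m, w, phi[2])
    if op == "box":
        i, f = phi[1], phi[2]
        return all(holds(m, v, f) for (u, v) in m["rel"][i] if u == w)
    if op == "dyn":
        e, s, f = phi[1], phi[2], phi[3]
        if not e["pre"][s](m, w):          # precondition fails: vacuously true
            return True
        return holds(product_update(m, e), (w, s), f)
    raise ValueError(f"unknown connective: {op}")
```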

The flexibility of event models is well-known. In the present epistemic-argumentative reading, they can be used to model the effects of acts of information exchange. As mentioned, these acts have two sides: (i) how the hearer decides to update her knowledge base with new information (information update) and (ii) which argument(s) the speaker decides to communicate in order to fulfill his goal (communication moves). In order to persuade their interlocutors, smart players choose (ii) based on their expectations about (i). Let us now illustrate this through the simplest combination of (i) and (ii) of Example 1. A deeper analysis is left for Sect. 7.

Example 7

We assume that (i) is as follows: Mom behaves credulously. This means that whenever Charlie communicates an argument, she simply adds it to her knowledge base. This is modelled through the event model for public addition of an argument (left-hand side of Fig. 4). As for (ii), we assume that Charlie thinks that Mom is indeed behaving credulously. Recall that Charlie has three options: communicating c, communicating d or communicating \(\{c,d\}\). Hence, his way of selecting the best set of arguments to communicate consists in reasoning about the effects of all options. Note that although one of the three moves will not work (\(M_1,w_0\vDash [\textsf {Pub}^{d},\bigtriangleup ]\textsf {strrej}_2 (a)\), see Fig. 5), Charlie thinks that they are equally good: \(M_1,w_0 \vDash \square _1([\textsf {Pub}^{c},\bigtriangleup ] \textsf {stracc}_2(a) \wedge [\textsf {Pub}^{d},\bigtriangleup ] \textsf {stracc}_2(a)\wedge [\textsf {Pub}^{\{c,d\}},\bigtriangleup ] \textsf {stracc}_2(a)) \). Therefore, under these assumptions, the success of Charlie is a simple matter of luck.

Fig. 5 Product update \(M_1 \otimes \textsf {Pub}^{d}\) capturing the effects of Charlie communicating d and Mom updating her AF credulously. See Example 4 for an explanation of the graphical representation of \({\mathcal {A}}o{\mathcal {A}}\)-models

Remark 3

Here communication of an argument x to everybody is modelled by the operation of public addition and not, as is common in DEL, as the public announcement of the formula \(\textsf {aw}_i(x)\) (where i is the speaker) or of \(\bigwedge _{i \in \textsf {Ag}}(\textsf {aw}_{i}(x))\).Footnote 27 The usual event model for public announcement of a formula \(\varphi \) is based on the same single-event structure of Fig. 4 (left), but with \(\varphi \) (instead of \(\top \)) as precondition and with no postconditions. If we modelled communication in this way, agents could never learn arguments whose (collective) awareness is not considered as a doxastic possibility before communication takes place; but this fails to capture what actually happens in most real-life debates. For instance, if we modelled communication of d by Charlie as the public announcement of \(\textsf {aw}_{1}(d)\) (resp. as the public announcement of \(\textsf {aw}_{1} (d) \wedge \textsf {aw}_{2} (d)\)) in the previous example, then the beliefs of Mom (resp. of everyone) would become inconsistent for no apparent reason.

We give axiomatisations and prove completeness for the dynamic extensions of \(\textsf {EA}\), \(\textsf {AoA}\), \(\textsf {S4}(\textsf {AoA})\) and \(\textsf {KD45}(\textsf {AoA})\).Footnote 28 For this we use reduction axioms and an inside-out reduction (as described e.g. in Wang and Cao 2013). That is to say, we do not use axioms for event-model composition; rather, we show how to eliminate all dynamic operators starting from their innermost occurrences. To do so, we need to prove that the rule of substitution of proven equivalents is sound w.r.t. all the systems considered. From a semantic perspective, this implies showing that the class of models we are working with is closed under product update. It is easy to show that this is in general not the case for \({\mathcal {A}}o{\mathcal {A}}\), \({\mathcal {S}}4({\mathcal {A}}o{\mathcal {A}})\) and \({\mathcal {S}}5({\mathcal {A}}o{\mathcal {A}})\). One possible solution to this shortcoming is to restrict the class of “allowed” event models, so as to ensure that we remain in the targeted class after the execution of the product update.

The general case of updating \({{\mathcal {E}}}{{\mathcal {A}}}\)-models with \({{\mathcal {E}}}{{\mathcal {A}}}\) event models does not present any problem. Indeed, \(a {\upepsilon }E_{k}\)-variables do not change their truth values by constraint i of Definition 12, and this guarantees that subset uniformity (SU) and enumeration representation (ER) are trivially preserved. Updating an \({\mathcal {A}}o{\mathcal {A}}\)-model with an event model that only uses \(\textsf {AoA}\)-substitutions further guarantees that attack uniformity (AU) is preserved, by constraint iii of Definition 12, since \(\leadsto \)-variables are also left untouched. The problem for updating \({\mathcal {A}}o{\mathcal {A}}\)-models lies in the awareness constraints PIAw and GNIAw. We can however provide a set of sufficient conditions for their preservation. For this we need to introduce some additional notation. Let E be an event model and let \(s \in E[S]\); define \(\textsf {pos}_i^{+}(s):=\{a\in A\mid \textsf {pos}(s)(\textsf {aw}_i(a))=\top \}\) and \(\textsf {pos}_i^{-}(s):=\{a\in A\mid \textsf {pos}(s)(\textsf {aw}_i(a))=\perp \}\). Informally, \(\textsf {pos}_i^{+}(s)\) (resp. \(\textsf {pos}_i^{-}(s)\)) denotes the set of arguments gained (resp. lost) by i as a consequence of executing s. Furthermore, given an event model E, we say that E satisfies:

  • \(\hbox {EM}_{1}\) iff for all \(s,t\in E[S]\): if \(s{\mathcal {T}}_i t\), then \(\textsf {pos}_i^{+}(s)\subseteq \textsf {pos}_i^{+}(t)\) and \(\textsf {pos}_i^{-}(t)\subseteq \textsf {pos}_i^{-}(s)\); and

  • \(\hbox {EM}_{2}\) iff for all \(s,t\in E[S]\): if \(s{\mathcal {T}}_i t\), then \(\forall j \in \textsf {Ag}\): \(\textsf {pos}_i^{-}(s)\subseteq \textsf {pos}_j^{-}(t)\) and \(\textsf {pos}_j^{+}(t)\subseteq \textsf {pos}_i^{+}(s)\).

Let us explain these conditions informally. In an event model satisfying \(\hbox {EM}_{1}\), if we suppose that s is the event that actually happens, then \(\hbox {EM}_{1}\) implies that any event t that agent i cannot tell from s is one where he gains at least the same new arguments and does not lose any argument he actually keeps. It is easy to see that \(\hbox {EM}_{1}\) preserves PIAw. Indeed, suppose that i is aware of a after the execution of s (antecedent \(\textsf {aw}_i(a)\) of PIAw). Two things are possible. Either a is a newly acquired argument (by the execution of s). Then, since any state accessible after the update is “filtered” by some indistinguishable event t, the condition \(\textsf {pos}_i^{+}(s)\subseteq \textsf {pos}_i^{+}(t)\) forces a to be acquired at that state too, and therefore the consequent \(\Box _{i}\textsf {aw}_i(a)\) is satisfied. Or else, i was already aware of a before the execution of s, and therefore he has not lost it. Here \(\textsf {pos}_i^{-}(t)\subseteq \textsf {pos}_i^{-}(s)\) guarantees that a is not lost at any state accessible after the execution of the event. An analogous informal reading, generalized to other agents, can be given for \(\hbox {EM}_{2}\): at any indistinguishable event any other agent loses at least the same arguments as i and gains no more. By the same pattern as before, we can see that this condition preserves GNIAw (see Lemma 1 for a detailed proof).
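Conditions \(\hbox {EM}_{1}\) and \(\hbox {EM}_{2}\) are straightforward to check mechanically. The brute-force test below uses the same dictionary encoding and the same string convention for awareness variables as the sketches above; all function names are ours.

```python
def pos_plus(e, i, s):
    """pos_i^+(s): arguments a with pos(s)(aw_i(a)) = True."""
    pref = f"aw_{i}("
    return {p[len(pref):-1] for p, b in e["pos"][s].items()
            if b and p.startswith(pref)}

def pos_minus(e, i, s):
    """pos_i^-(s): arguments a with pos(s)(aw_i(a)) = False."""
    pref = f"aw_{i}("
    return {p[len(pref):-1] for p, b in e["pos"][s].items()
            if not b and p.startswith(pref)}

def satisfies_em1(e, agents):
    # if s T_i t, then pos_i^+(s) <= pos_i^+(t) and pos_i^-(t) <= pos_i^-(s)
    return all(pos_plus(e, i, s) <= pos_plus(e, i, t)
               and pos_minus(e, i, t) <= pos_minus(e, i, s)
               for i in agents for (s, t) in e["rel"][i])

def satisfies_em2(e, agents):
    # if s T_i t, then for all j: pos_i^-(s) <= pos_j^-(t) and pos_j^+(t) <= pos_i^+(s)
    return all(pos_minus(e, i, s) <= pos_minus(e, j, t)
               and pos_plus(e, j, t) <= pos_plus(e, i, s)
               for i in agents for (s, t) in e["rel"][i] for j in agents)
```

For instance, both checks return True for pri(["1", "2"], "2", {"f"}) in this encoding, in line with Remark 4 below.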

Let us now define some relevant classes of event models:

Definition 16

(Classes of event models) We denote by \(\textsf {em12}\), \(\textsf {emS4}\) and \(\textsf {pure}\) the following classes of event models:

  • \(\textsf {em12}\) is the class of event models satisfying \(\hbox {EM}_{1}\), \(\hbox {EM}_{2}\) and assigning \(\textsf {AoA}\)-substitutions (see Definition 12) to all their events. In other words, \(E=(S,{\mathcal {T}},\textsf {pre},\textsf {pos})\in \textsf {em12}\) iff E satisfies \(\hbox {EM}_{1}\), \(\hbox {EM}_{2}\) and \(\textsf {pos}: S\rightarrow \textsf {SUB}^{\textsf {AoA}}\).

  • \(\textsf {emS4}\) is the subclass of \(\textsf {em12}\) where every \({\mathcal {T}}_i\) is a preorder.

  • \(\textsf {pure}\) is the subclass of \(\textsf {em12}\) s.t. \(\textsf {pre}(s)=\top \) for every \(s \in E[S]\) and every \({\mathcal {T}}_i\) of E is serial, transitive and euclidean.Footnote 29

Remark 4

Note that both \(\textsf {Pub}^{a}\) and \(\textsf {Pri}_i^{a}\) (see Examples 5 and 6, and Fig. 4) are purely argumentative event models (i.e. they belong to \(\textsf {pure}\)) and, a fortiori, they also belong to \(\textsf {em12}\).

We can then prove the following result:

Lemma 1

(Closure) Let \(M=(W,{\mathcal {R}},V)\) be an \({{\mathcal {E}}}{{\mathcal {A}}}\)-model and let \(E=(S,{\mathcal {T}},\textsf {pre},\textsf {pos})\) be an event model, then:

  (i) If \(M\otimes E\) is defined, then \(M\otimes E\in {{\mathcal {E}}}{{\mathcal {A}}}\).

  (ii) If \(M\in {\mathcal {A}}o{\mathcal {A}}\), \(E \in \textsf {em12}\), and \(M\otimes E\) is defined, then \(M\otimes E \in {\mathcal {A}}o{\mathcal {A}}\).

  (iii) If \(M \in {\mathcal {S}}4({\mathcal {A}}o{\mathcal {A}})\), \(E \in \textsf {emS4}\), and \(M\otimes E\) is defined, then \(M\otimes E \in {\mathcal {S}}4({\mathcal {A}}o{\mathcal {A}})\).

  (iv) If \(M \in {{\mathcal {K}}}{{\mathcal {D}}}45({\mathcal {A}}o{\mathcal {A}})\) and \(E \in \textsf {pure}\), then \(M\otimes E \in {{\mathcal {K}}}{{\mathcal {D}}}45({\mathcal {A}}o{\mathcal {A}})\).

Proof

See “Appendix A3”. \(\square \)

Remark 5

(The general value of \(\hbox {EM}_{1}\) and \(\hbox {EM}_{2}\)) If we dispense with the argumentation/awareness interpretation of the current formalism, Lemma 1(ii) tells us that we can look at \(\hbox {EM}_{1}\) and \(\hbox {EM}_{2}\) as general, sufficient conditions that guarantee the preservation of certain constraints over propositional valuations after product update. Therefore, they can be reused in any framework including event models and indexed operators (awareness operators, in our case) ranging over atomic entities (arguments, in our case). As suggested before, PIAw and GNIAw characterize a de re reading of operators ranging over atomic entities. Therefore, \(\hbox {EM}_{1}\) and \(\hbox {EM}_{2}\) are structural event constraints that, taken together, work as a sufficient condition to preserve these de re operators.

Lemma 1 allows us to prove the following general preservation result:

Table 3 Reduction axioms for dynamic systems

Lemma 2

(Validity preservation) All axiom instances of Table 3 written in \({\mathcal {L}}^{\textsf {ea}}\) (resp. \({\mathcal {L}}^{\textsf {em12}}\), \({\mathcal {L}}^{\textsf {emS4}}\), \({\mathcal {L}}^{\textsf {pure}})\) are valid in \({{\mathcal {E}}}{{\mathcal {A}}}\) (resp. \({\mathcal {A}}o{\mathcal {A}}\), \({\mathcal {S}}4({\mathcal {A}}o{\mathcal {A}}), {{\mathcal {K}}}{{\mathcal {D}}}45({\mathcal {A}}o{\mathcal {A}})\)), and all applications of SE in \({\mathcal {L}}^{\textsf {ea}}\) (resp. \({\mathcal {L}}^{\textsf {em12}}\), \({\mathcal {L}}^{\textsf {emS4}}\), \({\mathcal {L}}^{\textsf {pure}})\) preserve validity in \({{\mathcal {E}}}{{\mathcal {A}}}\) (resp. \({\mathcal {A}}o{\mathcal {A}}\), \({\mathcal {S}}4({\mathcal {A}}o{\mathcal {A}}), {{\mathcal {K}}}{{\mathcal {D}}}45({\mathcal {A}}o{\mathcal {A}}))\).

Proof

See “Appendix A3”. \(\square \)

General completeness results follow from Lemma 2. Let us first define the targeted axiom systems:

Definition 17

(Dynamic axiom systems)

  • \(\textsf {EA}^{\textsf {ea}}\) extends \(\textsf {EA}\) with all axiom schemes and rules of Table 3 that can be written in \({\mathcal {L}}^{\textsf {ea}}\) (see Definitions 15 and 16).

  • \(\textsf {AoA}^{\textsf {em12}}\) extends \(\textsf {AoA}\) with all axiom schemes and rules of Table 3 that can be written in \({\mathcal {L}}^{\textsf {em12}}\).

  • \(\textsf {S4}(\textsf {AoA})^{\textsf {emS4}}\) extends \(\textsf {S4}(\textsf {AoA})\) with all axiom schemes and rules of Table 3 that can be written in \({\mathcal {L}}^{\textsf {emS4}}\).

  • \(\textsf {KD45}(\textsf {AoA})^{\textsf {pure}}\) extends \(\textsf {KD45}(\textsf {AoA})\) with all axiom schemes and rules of Table 3 that can be written in \({\mathcal {L}}^{\textsf {pure}}\).

Theorem 2

The proof system \(\textsf {EA}^{\textsf {ea}}\) (resp. \(\textsf {AoA}^{\textsf {em12}}\), \(\textsf {S4}(\textsf {AoA})^{\textsf {emS4}}\), \(\textsf {KD45}(\textsf {AoA})^{\textsf {pure}})\) is sound and strongly complete w.r.t. \({{\mathcal {E}}}{{\mathcal {A}}}\) (resp. \({\mathcal {A}}o{\mathcal {A}}\), \({\mathcal {S}}4({\mathcal {A}}o{\mathcal {A}})\), \({{\mathcal {K}}}{{\mathcal {D}}}45({\mathcal {A}}o{\mathcal {A}}))\).

Proof

See “Appendix A3”. \(\square \)

Let us remark that in the case of \(\textsf {KD45}(\textsf {AoA})\) our completeness result is restricted to event models belonging to \(\textsf {pure}\). Although \(\textsf {pure}\) is a rather simple class of event models, all the actions used in our analysis of Example 7, as well as the one that will be used in the next section, fall into it. Unfortunately, modelling certain complex scenarios requires mixing purely argumentative actions with other types—for instance, public and private announcements of formulas, where preconditions are not trivial. One last axiomatisation, inspired by the works of Balbiani et al. (2012) and Aucher (2008), aims to fill this gap. The interested reader can find it in the “Appendix A4” (Theorem 3). The axiomatisation is based on a modal language with a global modality. Interestingly, this more expressive language also allows us to provide necessary and sufficient conditions for the preservation of PIAw and GNIAw under product update.

7 Modelling persuasion, sceptic updates and conditional trust

In Example 7 of the previous section we unfolded the dynamics of our running example by assuming that Mom was willing to accept whatever Charlie says at face value. In a more likely scenario this does not happen: Mom will filter the information received from Charlie, precisely because she does not trust him in such circumstances. Yet Charlie would still be confident, as kids often are, that he is fooling her. Although Mom is not immediately aware of the counter-argument against the Pscience publication, she can obtain it after a quick (private) search on Pscience’s website. It is important to stress that Mom does not discard argument c; she rather accepts it, but eventually finds out the counterargument f. This is a rather common mechanism of epistemic vigilance, of the kind we mentioned in Sect. 2. One possible way of capturing this epistemic action in our framework is what we call a sceptic update \(\textsf {Scp}_{j}^{x}\), where the recipient j of an argument x privately and non-deterministically learns an attacker of x (if any). When our language contains \(\textsf {Pub}\) and \(\textsf {Pri}\), it is possible to define a modality \([\textsf {Scp}_{j}^{x}]\varphi \), expressing that \(\varphi \) holds after j performs a sceptic update upon receiving argument xFootnote 30:

$$\begin{aligned} {[}\textsf {Scp}_{j}^{x}]\varphi := [\textsf {Pub}^{x},\bigtriangleup ]\left( \bigwedge _{y \in A}(y \leadsto x \rightarrow [\textsf {Pri}_{j}^{y}, \bullet ] \varphi )\right) . \end{aligned}$$

As an example, the bottom part of Fig. 6 represents the outcome of Mom’s sceptic update as the result of two consecutive actions—a public addition of c followed by Mom privately learning f—on the initial \({\mathcal {A}}o{\mathcal {A}}\)-model \(M_1\) (Fig. 6, top part). In our model, Charlie thinks that he has succeeded (\(M_1,w_0 \vDash [\textsf {Scp}^{c}_{2}] \square _1 \textsf {stracc}_2(a)\)), while actually he has not (\(M_1,w_0 \vDash [\textsf {Scp}^{c}_{2}] \lnot \textsf {stracc}_2(a)\)) and, moreover, agent 2 (Mom) believes all this (\(M_1,w_0 \vDash [\textsf {Scp}^{c}_{2}]\square _2 (\square _1 \textsf {stracc}_2(a) \wedge \lnot \textsf {stracc}_2(a))\)).
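Operationally, a sceptic update can be simulated as a public addition followed by one private addition per attacker of the communicated argument. The sketch below reuses product_update, pub and pri from the sketches of the previous section; the parameter attackers_of_x, which we pass explicitly, is a simplifying assumption of ours (in the logic, attackers are read off the model world by world).

```python
def sceptic_outcomes(m, agents, hearer, x, attackers_of_x):
    """Non-deterministic outcomes of Scp_hearer^x: publicly add x, then let the
    hearer privately learn one attacker y of x.  Returns one updated model per
    attacker (an empty list if x has no attacker in attackers_of_x)."""
    after_pub = product_update(m, pub(agents, {x}))
    return [product_update(after_pub, pri(agents, hearer, {y}))
            for y in attackers_of_x]
```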

Fig. 6 Model \(M_1\) (top part) and model \((M_1\otimes \textsf {Pub}^{c})\otimes \textsf {Pri}^{f}_{2}\) (bottom part)

We now have two scenarios with substantially different outcomes. The question is how closely they reflect the typical behaviour of players in a more or less adversarial exchange. In both cases, we assumed that Charlie is confident that Mom will accept everything he says without further inquiry. Is Charlie the prototype of a skilled debater? Clearly not: he still lacks some mindreading and the subtlety of anticipating easy counterobjections, skills that kids typically learn to use at an advanced stage of their cognitive development, and after a lot of trial and error. We further assumed that Mom has full trust in the first scenario and absolute distrust in the second. Distrust is driven by epistemic vigilance, but circumstances are not always black or white. After all, there are cases in which she has the right—or even the educational duty—to trust her kid.

We want to stress that trust is most of the time mixed, and mixed in a relevant sense. Not only does it vary with the source of information—Mom may trust Charlie and not Dad—or with the type of information we get from the source—she may trust Charlie more or less depending on the matter at stake. Trust is often also conditional on the epistemic circumstances we find ourselves in, all other things being equal. In order to see this clearly, we introduce a different example, which we borrow from Kagemusha, a famous film directed by Akira Kurosawa.

Example 8

(Kagemusha) The warlord of the clan Takeda has been killed unbeknownst to everybody except for the members of his clan and his political decoy (kagemusha). It is vital that the warlord’s death stay secret and that his double keeps playing his role. Therefore, everybody outside the clan must be persuaded that a: “the warlord is alive”. The warlord’s funeral is then performed anonymously and in a peculiar way: a jar with the ashes is launched into the lake Suwa on a raft.

Unfortunately, spies from rival clans are around and, by snooping on this strange ritual, they start suspecting the truth, that is, they are provided with an evidential argument b that rebuts a. Now, by accident, the spies are in turn spied upon by the kagemusha, who reports this to the rest of the clan. The clan then decides to concoct an alternative (false) explanation of the ritual—an offering of sake to the god of the lake—and to tell it around. This alternative explanation c undercuts b and reinstates a. This has the effect of persuading the spies that they were wrong, that the ritual was indeed not a funeral and the warlord is still alive.

As things stand, argument c is de facto undermined by a decisive argument d, to the effect that c does not hold water. But the spies have no access to d, and the clan’s strategy has the intended effect of persuading them that a is reinstated and therefore acceptable. The following MAF captures the situation right after the spies have observed the funeral, where agent 1 represents Takeda’s clan and agent 2 represents the spies:

(Figure omitted: the MAF described above, with arguments a, b, c, d and attacks \(b\leadsto a\), \(c\leadsto b\) and \(d\leadsto c\).)

What is clear from the story is that the spies would never have accepted the fake explanation c at face value had they only suspected that the clan was aware of being spied. Instead, they would have easily resorted to d by performing a sceptic update. The only difference between success and failure lies in the initial epistemic state of the agents involved, as the modelling in Fig. 7 shows. In the first scenario (captured in model \(M_2\)) 2 believes that 1 believes that its goal is already achieved (2 is only aware of a at \(w_2\)), i.e. \(M_2,w_0\vDash \square _2 \square _1 \textsf {stracc}_2(a)\), while in the second case (captured in model \(M_2'\)) 2 believes that 1 believes that it is not (2 is aware of a and b at \(w_2\)), i.e., \(M_2',w_0\nvDash \square _2 \square _1 \textsf {stracc}_2(a)\).Footnote 31

What is also clear is that the different attitude displayed by the spies in the alternative situations does not depend on the source—which is the same—nor on the subject matter—again, the same. It is fair to say that the trust they put in the information received is part of one and the same conditional plan for updating information. They sceptically process the information received if they believe that the clan believes that its goal is not achieved yet but will be after communicating c; otherwise they uncritically accept it. This condition can be generally defined as

$$\begin{aligned} \textsf {CSU}_{i,j}^{x}:= \square _{j} \square _{i}(\lnot \textsf {goal}_i \wedge [\textsf {Pub}^{x},\bigtriangleup ] \textsf {goal}_i), \end{aligned}$$

where i is the speaker, \(\textsf {goal}_i \in {\mathcal {L}}({\mathcal {V}}^{A}_{\textsf {Ag}},\square )\) is its goal, j is the hearer, and x is the communicated argument. The clan, on its side, is well aware of this: they know they can get away with a fake only because, given the circumstances, epistemic vigilance is defused.

It is possible to reason about the effects of conditional plans like the above in our language by defining more complex modalities, like the following one, which captures the effects of this kind of strategic update:

$$\begin{aligned} {[}\textsf {Str}_j^{x}] \varphi := (\lnot \textsf {CSU}_{i,j}^{x} \rightarrow [\textsf {Pub}^{x},\bigtriangleup ] \varphi ) \wedge (\textsf {CSU}_{i,j}^{x} \rightarrow [\textsf {Scp}_{j}^{x}] \varphi ), \end{aligned}$$

where i is the speaker, j is the hearer, and x is the communicated argument. Note that \(M_2,w_0\vDash [\textsf {Str}_2^{c}]\textsf {stracc}_2(a)\) but \(M_2',w_0\vDash \lnot [\textsf {Str}_2^{c}]\textsf {stracc}_2(a)\). This kind of operation has received attention in semantically oriented belief revision (see e.g. Rodenhäuser 2014, §2.6.1 and the definition of mixed doxastic attitude). A thorough analysis of the subtleties of strategic communication seems to require powerful tools akin to those currently developed in the area of epistemic planning (Andersen et al. 2012). This investigation goes beyond the scope of our paper and we leave it for future research.

Fig. 7 Kagemusha, scenarios \(M_2\) and \(M_2'\). See Example 4 for an explanation of the graphical representation of \({\mathcal {A}}o{\mathcal {A}}\)-models

8 Relation to other formalisms

Recently, uncertainty about AFs has been modelled through quantitative methods (Li et al. 2011) and qualitative ones within the formal argumentation community. Among the qualitative approaches the use of incomplete argumentation frameworks (Coste-Marquis et al. 2007; Baumeister et al. 2018a, b) and control argumentation frameworks (Dimopoulos et al. 2018) has been prominent. Also, opponent modelling in strategic argumentation (Oren and Norman 2009) has been endowed with higher order uncertainty about adversaries (Rienstra et al. 2013). Our logic can be naturally connected to these three lines of research.

8.1 Incomplete AFs

General models of incompleteness in abstract argumentation (Baumeister et al. 2018b) capture uncertainty by extending standard AFs with uncertain arguments \(A^{?}\) and uncertain attacks \(R^{?}\). Their formal definition is as follows.

Definition 18

(Incomplete AF and completions Baumeister et al. 2018b) An incomplete AF is a tuple \(\textsf {IAF}=(A,A^{?},R, R^{?})\) s.t. \(R,R^{?}\subseteq (A\cup A^{?})\times (A\cup A^{?})\), \(A\cap A^{?}=\emptyset \) and \(R\cap R^{?}=\emptyset \). \((A,R)\) is called the definite part of \(\textsf {IAF}\) while \((A^{?},R^{?})\) is called the uncertain part of \(\textsf {IAF}\).

A completion of \(\textsf {IAF}\) is any pair \((A^{*},R^{*})\) s.t.:

  • \(A\subseteq A^{*} \subseteq (A\cup A^{?})\); and

  • \(R_{\mid A^{*}}\subseteq R^{*} \subseteq (R\cup R^{?})_{\mid A^{*}}\) where \(R_{\mid A^{*}}:= R\cap (A^{*}\times A^{*})\).

Completions can be seen as possible ways of removing uncertainty by making some arguments and attacks definite. Here, the constraint on \(R^{*}\) entails that definite attacks between a and b must be present in all completions where both a and b are present.

Classic computational problems for AFs, such as sceptical or credulous acceptance, are easily generalized to incomplete AFs. As an example, consider two generalizations of the classic preferred reasoning tasks as given in Baumeister et al. (2018a):

 

\(\textsf {Pr}\)-Possible–Sceptical–Acceptance (\(\textsf {Pr}\)-PSA)

Given: An incomplete argumentation framework \(\textsf {IAF}=(A,A^{?}\!,R,R^{?})\) and an argument \(a\in A\).

Question: Is it true that there is a completion \({\textsf {F}}^{*}=(A^{*},R^{*})\) of \((A,A^{?}\!,R,R^{?})\) s.t. for all \(E\in \textsf {Pr}({\textsf {F}}^{*})\), \(a \in E\)?

 

\(\textsf {Pr}\)-Necessary-Credulous-Acceptance (\(\textsf {Pr}\)-NCA)

Given: An incomplete argumentation framework \(\textsf {IAF}=(A,A^{?}\!,R,R^{?})\) and an argument \(a\in A\).

Question: Is it true that for each completion \({\textsf {F}}^{*}=(A^{*},R^{*})\) of \((A,A^{?}\!,R,R^{?})\), there is an \(E\in \textsf {Pr}({\textsf {F}}^{*})\) s.t. \(a \in E\)?

The \(\textsf {Pr}\)-Necessary–Sceptical–Acceptance and \(\textsf {Pr}\)-Possible–Credulous–Acceptance problems are obtained by swapping the quantifiers in the definitions above in the obvious way. Similarly, \(\textsf {Pr}\) can be replaced by any other solution concept. It is not difficult to show that the set of completions of an \(\textsf {IAF}\) is a single-agent \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model in disguise, where \(A\cup A^{?}\) is the underlying pool of arguments. This has the effect that the above computational problems can be regarded as model-checking problems in our framework. Let us make this claim more precise.

8.1.1 From incomplete AFs to \({{\mathcal {E}}}{{\mathcal {A}}}\)-models

Given an incomplete argumentation framework \(\textsf {IAF}=(A,A^{?},R, R^{?})\), we can build a single-agent \({{\mathcal {E}}}{{\mathcal {A}}}\)-model to reason about \(\textsf {IAF}\) using our object language. First, we fix some enumeration of \(\wp (A\cup A^{?})=\{E_1,\ldots ,E_n\}\). Then, we define the set of propositional variables associated to \(\textsf {IAF}\) as \({\mathcal {V}}^{\textsf {IAF}}={\mathcal {V}}^{A\cup A^{?}}_{\{1\}}\). Since we have only one agent, we remove subindices from awareness and epistemic operators. We can then provide the following definition:

Definition 19

Let \(\textsf {IAF}=(A,A^{?}\!,R,R^{?})\) be given. The \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model associated to \(\textsf {IAF}\) (for the enumeration of \(\wp (A\cup A^{?})=\{E_1,\ldots ,E_n\}\)) is the model

$$\begin{aligned} M^{\textsf {IAF}}:=(W^{\textsf {IAF}},{\mathcal {R}}^{\textsf {IAF}},V^{\textsf {IAF}}) \end{aligned}$$

where

  • \(W^{\textsf {IAF}}:=\{w^{(A^{*},R^{*})}\mid (A^{*},R^{*}) \text { is a completion of } \textsf {IAF}\}\);Footnote 32

  • \({\mathcal {R}}^{\textsf {IAF}}:=W^{\textsf {IAF}}\times W^{\textsf {IAF}}\);

  • \(V^{\textsf {IAF}}\) is defined for each kind of variables as follows:

    \(V^{\textsf {IAF}}(\textsf {aw}(x))=\{w^{(A^{*},R^{*})}\mid x \in A^{*}\}\),

    \(V^{\textsf {IAF}}(x\leadsto y)=\{w^{(A^{*},R^{*})}\mid (x,y) \in R^{*}\}\),

    for every k such that \(1\le k \le n\): \(V^{\textsf {IAF}}(x {\upepsilon }E_k)=W^{\textsf {IAF}}\) if \(x \in E_k\), and

    for every k such that \(1\le k \le n\): \(V^{\textsf {IAF}}(x {\upepsilon }E_k)=\emptyset \) if \(x \notin E_k\).

The above reduction allows us to obtain the following result:

Proposition 2

Let \(\textsf {IAF}=(A,A^{?}\!,R,R^{?})\), \(\wp (A\cup A^{?})=\{E_1,\ldots ,E_n\}\), \(M^{\textsf {IAF}}\) be the \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model associated to \(\textsf {IAF}\) (for the enumeration of \(\wp (A\cup A^{?})=\{E_1,\ldots ,E_n\})\), and let \(w\in M^{\textsf {IAF}}[W]\). We have that:

  • The answer to \(\textsf {Pr}\)-PSA with input \(\textsf {IAF}\) and \(a\in A\) is yes iff \(M^{\textsf {IAF}},w\vDash \lozenge \textsf {stracc}(a)\).Footnote 33

  • The answer to \(\textsf {Pr}\)-NCA with input \(\textsf {IAF}\) and \(a\in A\) is yes iff \(M^{\textsf {IAF}},w\vDash \square \bigvee _{1 \le k \le n}(\textsf {preferred}(E_k)\wedge a {\upepsilon }E_k)\).

Proof

See “Appendix A5”. \(\square \)

In other words, the main reasoning problems about incomplete AFs can be reduced to model-checking problems in our framework.Footnote 34
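As a companion to Proposition 2, the two reasoning tasks can also be decided by brute force, enumerating completions and preferred extensions directly rather than going through the modal encoding. The following illustrative Python sketch is naive and exponential; all function names are our own.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def conflict_free(S, R):
    return all((x, y) not in R for x in S for y in S)

def defends(S, a, A, R):
    return all(any((z, y) in R for z in S) for y in A if (y, a) in R)

def admissible(S, A, R):
    return conflict_free(S, R) and all(defends(S, a, A, R) for a in S)

def preferred(A, R):
    """Maximal (w.r.t. inclusion) admissible sets of the AF (A, R)."""
    adm = [frozenset(S) for S in powerset(A) if admissible(set(S), A, R)]
    return [S for S in adm if not any(S < T for T in adm)]

def completions(A, A_unc, R, R_unc):
    """All completions of the incomplete AF (A, A?, R, R?) of Definition 18."""
    for extra_args in powerset(A_unc):
        A_star = set(A) | set(extra_args)
        definite = {(x, y) for (x, y) in R if x in A_star and y in A_star}
        optional = [(x, y) for (x, y) in R_unc if x in A_star and y in A_star]
        for extra_atts in powerset(optional):
            yield A_star, definite | set(extra_atts)

def possible_sceptical(A, A_unc, R, R_unc, a):
    """Pr-PSA: some completion in which a belongs to every preferred extension."""
    return any(all(a in E for E in preferred(A_s, R_s))
               for A_s, R_s in completions(A, A_unc, R, R_unc))

def necessary_credulous(A, A_unc, R, R_unc, a):
    """Pr-NCA: every completion has some preferred extension containing a."""
    return all(any(a in E for E in preferred(A_s, R_s))
               for A_s, R_s in completions(A, A_unc, R, R_unc))
```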

8.1.2 From \({{\mathcal {E}}}{{\mathcal {A}}}\)-models to incomplete AFs

In the opposite direction, we can easily transform members of a specific class of \({{\mathcal {E}}}{{\mathcal {A}}}\)-models into incomplete AFs, with a sound and systematic way to associate states to completions. This is provided by the following definition.

Definition 20

Let \(M=(W,{\mathcal {R}},V)\) be a total \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model, that is, an \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model s.t. \({\mathcal {R}}=W\times W\), defined for \({\mathcal {V}}^{C}_{\{1\}}\) where C is any finite, non-empty set of arguments, and such that V represents the enumeration \(\wp (C)=\{E_1,\ldots ,E_n\}\). We define the incomplete argumentation framework associated to M as the tuple

$$\begin{aligned} \textsf {IAF}_{M}=(A_M,A_M^{?},R_M,R_M^{?}) \end{aligned}$$

where

  • \(A_M=\{x \in C\mid V(\textsf {aw}(x))=W\}\);

  • \(A_M^{?}=\{x \in C\mid V(\textsf {aw}(x))\ne W, V(\textsf {aw}(x))\ne \emptyset \}\);

  • \(R_M=\{(x,y)\in C\times C\mid V(\textsf {aw}(x))\cap V(\textsf {aw}(y))\subseteq V(x\leadsto y)\}\); and

  • \(R_M^{?}=\{(x,y)\in C\times C\mid V(\textsf {aw}(x))\cap V(\textsf {aw}(y))\cap V(x\leadsto y)\ne \emptyset \}{\setminus } R_M\).

By definition, we have that \(A_M\cap A_M^{?}= \emptyset \) and \(R_M\cap R_M^{?}=\emptyset \), therefore \(\textsf {IAF}_M\) is an incomplete AF.Footnote 35 Moreover, we can associate a directed graph \((A^{*}_w,R^{*}_w)\) to each state \(w \in M[W]\), where \(A^{*}_w:=A(w)\) and \(R^{*}_w:=R(w)_{\mid A(w)} \). It is almost immediate to check that each \((A^{*}_w,R^{*}_w)\) is a completion of \(\textsf {IAF}_M\).
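Definition 20 can be implemented in a few lines. In the illustrative sketch below, a total single-agent model is given by its set of worlds together with, for each world w, the agent's awareness set aw[w] and the set att[w] of attack pairs holding at w; these parameter names and the encoding are our own assumptions.

```python
def iaf_of_model(worlds, aw, att, C):
    """Definition 20 on a total single-agent model: aw[w] is the agent's
    awareness set at world w, att[w] the set of attack pairs holding at w,
    and C the underlying pool of arguments."""
    A   = {x for x in C if all(x in aw[w] for w in worlds)}
    A_q = {x for x in C if any(x in aw[w] for w in worlds)} - A
    R   = {(x, y) for x in C for y in C
           if all((x, y) in att[w] for w in worlds
                  if x in aw[w] and y in aw[w])}
    R_q = {(x, y) for x in C for y in C
           if any(x in aw[w] and y in aw[w] and (x, y) in att[w]
                  for w in worlds)} - R
    return A, A_q, R, R_q
```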

Given a total \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model \(M=(W,{\mathcal {R}},V)\) and its associated \(\textsf {IAF}_{M}\), we say that the valuation V exhausts the completions of \(\textsf {IAF}_M\) iff for each completion \((A^{*},R^{*})\) of \(\textsf {IAF}_M\), there is a state \(u\in M[W]\) s.t. \((A^{*},R^{*})=(A^{*}_u,R^{*}_u)\).Footnote 36 Under this restriction, we can prove the following correspondence result analogous to Proposition 2.

Proposition 3

Let M be a total \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model for \({\mathcal {V}}^{C}_{\{1\}}\) whose valuation represents the enumeration \(\wp (C)=\{E_1,\ldots ,E_n\}\) and exhausts the completions of \(\textsf {IAF}_M\), and let \(w\in M[W]\), then:

  • \(M,w \vDash \lozenge \textsf {stracc}(a)\) iff the answer to \(\textsf {Pr}\)-PSA with input \(\textsf {IAF}_M\) and \(a\in A_M\) is yes.

  • \(M,w \vDash \square \bigvee _{1 \le k \le n}(\textsf {preferred}(E_k)\wedge a {\upepsilon }E_k)\) iff the answer to \(\textsf {Pr}\)-NCA with input \(\textsf {IAF}_M\) and \(a\in A_M\) is yes.

Proof

See “Appendix A5”. \(\square \)

Remark 6

(AF spaces) Interestingly, if we drop the exhaustive valuation requirement, we obtain a one-to-one association from total \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-models to a more general class of structures, which we call AF spaces and which is of interest in its own right. An AF space is a pair \((\textsf {IAF},{\mathcal {X}})\) where \({\mathcal {X}}\) is any set of completions of \(\textsf {IAF}\). Incomplete AFs can be seen as a special case of AF spaces (those for which \({\mathcal {X}}\) is maximal w.r.t. set inclusion). The converse, however, does not hold. As an example, consider the set of completions associated to the worlds of the \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-model depicted in Fig. 2, i.e. \(\{(\{b,c,d\},\{(c,b)\}), (\{b,c,d\},\{(b,d)\})\}\). It is easy to show that there is no IAF with such a set of completions. We can obviously redefine the main acceptance problems for AF spaces. As an example, the following is the variant of \(\textsf {Pr}\)-PSA:

 

\(\textsf {Pr}\)-\({\mathcal {X}}\)-Possible-Sceptical-Acceptance (\(\textsf {Pr}\)-\({\mathcal {X}}\)PSA)

Given: An AF space \(((A,A^{?}\!,R,R^{?}),{\mathcal {X}})\) and an argument \(a\in A\).

Question: Is it true that there is a completion \({\textsf {F}}^{*}\in {\mathcal {X}}\) s.t. for all \(E\in \textsf {Pr}({\textsf {F}}^{*})\), \(a \in E\)?

Intuitively, AF spaces drop the assumption that the agent perceives \((A^{?},R^{?})\) as completely uncertain, meaning that all combinations of its elements are possible (as far as they are completions). The latter assumption can be seen as an inconvenience for some modelling processes, since uncertainty need not be that homogeneous in many real-life argumentative scenarios.

Interestingly, within the class of total \({\mathcal {S}}5({{\mathcal {E}}}{{\mathcal {A}}})\)-models we can isolate two subclasses corresponding to the specific types of IAFs most discussed in the literature, and this by simply applying some of the restrictions we have axiomatised.

  • If M satisfies PIAw and NIAw, then \(A^{?}_M=\emptyset \). In other words, \(\textsf {IAF}_M\) is an attack-incomplete AF, also called a partial AF, first studied by Cayrol et al. (2007).

  • If M satisfies AU, then \(R^{?}_M=\emptyset \). In other words, \(\textsf {IAF}_M\) is an argument-incomplete AF, as introduced by Coste-Marquis et al. (2007).

As should now be evident, \({{\mathcal {E}}}{{\mathcal {A}}}\)-models are much more general than incomplete AFs. More concretely, there are three types of information that we can model with \({{\mathcal {E}}}{{\mathcal {A}}}\)-models but that fall outside the scope of incomplete AFs: nested beliefs, multi-agent information and non-total uncertainty about the elements of \(A^{?}\) and \( R^{?}\). Moreover, the basic modal language can be used to answer the queries of the main reasoning tasks regarding argument acceptability in incomplete AFs.

8.2 Control AFs

Control argumentation frameworks (Dimopoulos et al. 2018) are a more complex kind of structure for representing qualitative uncertainty. They enrich incomplete AFs in two different senses. First, they augment incomplete AFs with an additional uncertain attack relation \(\leftrightarrows \), whose precise meaning will be clarified later on. Second, they include a dynamic component by considering yet another partition of the underlying AF (the control part), which is intuitively assumed to be modifiable by the agent. In this subsection, we provide a natural epistemic multi-agent interpretation of control argumentation frameworks (CAFs) using our logic. The intuitive picture behind this interpretation is that of an agent, \(\textsf {PRO}\) (the proponent), reasoning about how to convince another agent, \(\textsf {OPP}\) (the opponent). Here, the uncertain part of CAFs captures the lack of total knowledge of \(\textsf {PRO}\) about \(\textsf {OPP}\)’s knowledge of the underlying AF. Moreover, the so-called control part of a CAF represents the private knowledge of \(\textsf {PRO}\). We also provide a reduction of the main reasoning tasks regarding CAFs to our logic. Let us start with the main definitions concerning CAFs and their semantics (Dimopoulos et al. 2018).

Definition 21

(Control argumentation framework) A control argumentation framework is a triple \(\textsf {CAF}= (F,C,U)\) where:

  • \(F=(A,R)\) is called the fixed part, where \(R\subseteq (A\cup A^{?})\times (A\cup A^{?})\) and \(A\) and \(A^{?}\) are two finite sets of arguments;

  • \(U=(A^{?},(R^{?}\cup \leftrightarrows ))\) is called the uncertain part, where \(R^{?},\leftrightarrows \subseteq (A\cup A^{?})\times (A\cup A^{?})\) and \(\leftrightarrows \) is symmetric and irreflexive;

  • \(C=(A_{C},R_{C})\) is called the control part where \(A_{C}\) is yet another finite set of arguments and \(R_{C}\subseteq (A_{C}\times (A\cup A^{?}\cup A_{C})) \cup ((A\cup A^{?}\cup A_{C})\times A_{C})\text {;}\)

  • \(A\), \(A^{?}\), and \(A_{C}\) are pairwise disjoint; and

  • \(R,R^{?},\leftrightarrows \), and \( R_{C}\) are pairwise disjoint.

We sometimes call \(A\cup A_{C}\cup A^{?}\) the domain of \(\textsf {CAF}\) and denote it as \(\varDelta ^{\textsf {CAF}}\). Intuitively, the new components can be thought of as follows. \(\leftrightarrows \) is an attack relation s.t. the existence of its elements is known by the agent, but their direction is unknown. So, whenever \((x,y)\in \leftrightarrows \), it intuitively means that the agent knows that there is an attack between x and y but does not know which argument attacks which. As for \(C=(A_{C},R_{C})\), it is supposed to be the part of the framework that depends on the actions of the agent. These intuitions are formally specified in the following definitions:

Definition 22

(Completion) A completion of \(\textsf {CAF}=(F,C,U)\) is any AF \((A^{*},R^{*})\) s.t.:

  • \((A\cup A_{C})\subseteq A^{*}\subseteq (A\cup A_{C}\cup A^{?})\);

  • \((R\cup R_{C})_{\mid A^{*}}\subseteq R^{*} \subseteq (R\cup R_{C}\cup R^{?}\cup \leftrightarrows )_{\mid A^{*}}\); and

  • for every xy: \((x,y)\in \leftrightarrows \) and \(x,y\in A^{*}\) implies \((x,y)\in R^{*}\) or \((y,x)\in R^{*}\).

From an epistemic perspective, completions can be understood as possible knowledge bases that \(\textsf {PRO}\) attributes to \(\textsf {OPP}\). Note that the control arguments \(A_{C}\) are included in every completion. Something similar happens with control attacks (conditionally on the domain \(A^{*}\) of each completion). The intuition here is that \((F,C,U)\) provides the picture of a finished debate seen from \(\textsf {PRO}\)’s point of view, where she has communicated all her available arguments \(A_{C}\). The spectrum of debate states lying between the initial one (where nothing has been said) and \((F,C,U)\) is captured by the notion of control configuration:

Definition 23

(Control configuration) Given \(\textsf {CAF}=(F,C,U)\), a control configuration is a subset of control arguments \(\textsf {CFG}\subseteq A_{C}\), and its associated CAF is \(\textsf {CAF}_{\textsf {CFG}}:=(F,C_{\textsf {CFG}},U)\) where \(C_{\textsf {CFG}}:=(\textsf {CFG},R_{C}\mid _{A\cup A^{?}\cup \textsf {CFG}})\).

Once more, classical reasoning tasks regarding AFs can be naturally generalised to CAFs. As an example, consider the following one (Dimopoulos et al. 2018):

 

\(\textsf {Pr}\)-Necessary-Sceptical-Controllability (\(\textsf {Pr}\)-NSCon)

Given: A control argumentation framework \(\textsf {CAF}=(F,C,U)\) and an argument \(a\in A\).

Question: Is it true that there is a configuration \(\textsf {CFG}\subseteq A_{C}\) s.t. for every completion \({\textsf {F}}^{*}=(A^{*},R^{*})\) of \(\textsf {CAF}_{\textsf {CFG}}\) and for all \(E\in \textsf {Pr}({\textsf {F}}^{*})\), \(a \in E\)?

We now show how to build a two-agent \({{\mathcal {E}}}{{\mathcal {A}}}\)-model to reason about a given CAF. First, given \(\textsf {CAF}=(F,C,U)\), we define the set of variables of \(\textsf {CAF}\) as \({\mathcal {V}}^{\textsf {CAF}}:={\mathcal {V}}^{A\cup A_{C}\cup A^{?}}_{\{\textsf {PRO},\textsf {OPP}\}}\).

Definition 24

(Associated model) Let \(\textsf {CAF}=(F,C,U)\), let \(\wp (A\cup A_{C}\cup A^{?})=\{E_1,\ldots ,E_n\}\), we define the \({{\mathcal {E}}}{{\mathcal {A}}}\)-model associated to \(\textsf {CAF}\) as \(M^{\textsf {CAF}}:=(W^{\textsf {CAF}},{\mathcal {R}}^{\textsf {CAF}},V^{\textsf {CAF}})\) where:

  • \(W^{\textsf {CAF}}:=\{w^{(A^{*},R^{*})}\mid (A^{*},R^{*}) \text { is a completion of }\textsf {CAF}_{\emptyset }\}\).

  • \({\mathcal {R}}^{\textsf {CAF}}_{\textsf {PRO}}:=W^{\textsf {CAF}}\times W^{\textsf {CAF}}\) and \({\mathcal {R}}^{\textsf {CAF}}_{\textsf {OPP}}:=\emptyset \).Footnote 37

  • \(V^{\textsf {CAF}}\) is defined for each kind of variables as follows:

    \(V^{\textsf {CAF}}(\textsf {aw}_{\textsf {PRO}}(x))=W^{\textsf {CAF}}\);

    \(V^{\textsf {CAF}}(\textsf {aw}_{\textsf {OPP}}(x))=\{w^{(A^{*},R^{*})}\mid x \in A^{*}\}\);

    \(V^{\textsf {CAF}}(x\leadsto y)=\{w^{(A^{*},R^{*})}\mid (x,y) \in R^{*} \quad \text {or} \quad (x,y)\in R_{C}\}\);

    for every k such that \(1\le k \le n\): \(V^{\textsf {CAF}}(x {\upepsilon }E_k)=W^{\textsf {CAF}}\) if \(x \in E_k\); and

    for every k such that \(1\le k \le n\): \(V^{\textsf {CAF}}(x {\upepsilon }E_k)=\emptyset \) if \(x \notin E_k\).

    Moreover, for any \(\textsf {CFG}\subseteq A_{C}\) we define \(M^{\textsf {CFG}}:=M^{\textsf {CAF}}\otimes \textsf {Pub}^{\textsf {CFG}}\).

Remark 7

Note that the set of completions of \(\textsf {CAF}_{\emptyset }\) is equal to \(\{(A^{*}_w,R^{*}_w)\mid w \in M^{\textsf {CAF}}[W] \}\) where \(A_w^{*}:=A^{M}_{\textsf {OPP}}(w)\) and \(R_w^{*}:=R^{M}(w)_{\mid A_{w}^{*}}\).Footnote 38 Moreover, for any \(\textsf {CFG}\subseteq A_{C}\), it can be shown that the set of completions of \(\textsf {CAF}_{\textsf {CFG}}\) is equal to \(\{(A^{*}_w,R^{*}_w)\mid w \in M^{\textsf {CFG}}[W] \}\).

The following proposition digs into this multi-agent epistemic interpretation of CAFs:

Proposition 4

Let \(\textsf {CAF}=(F,C,U)\) be a CAF, let \(M^{\textsf {CAF}}\) be its associated model, and let \(w \in M^{\textsf {CAF}}[W]\). We have that:

  • \(A=\{x\in \varDelta ^{\textsf {CAF}}\mid M^{\textsf {CAF}},w\vDash \square _{\textsf {PRO}}\textsf {aw}_{\textsf {OPP}}(x)\}\), i.e. the set of fixed arguments is the set of arguments that the proponent knows that the opponent is aware of.

  • \(A^{?}=\{x\in \varDelta ^{\textsf {CAF}}\mid M^{\textsf {CAF}},w\vDash \lozenge _{\textsf {PRO}}\textsf {aw}_{\textsf {OPP}}(x)\wedge \lozenge _{\textsf {PRO}} \lnot \textsf {aw}_{\textsf {OPP}}(x) \}\), i.e. uncertain arguments are those for which \(\textsf {PRO}\) considers it possible both that \(\textsf {OPP}\) is aware of them and that \(\textsf {OPP}\) is not.

  • \(A_{C}=\{x\in \varDelta ^{\textsf {CAF}} \mid M^{\textsf {CAF}},w\vDash \square _{\textsf {PRO}} \lnot \textsf {aw}_{\textsf {OPP}}(x) \}\), i.e. control arguments are the arguments that \(\textsf {PRO}\) knows that \(\textsf {OPP}\) is not aware of.

  • \(R=\{(x,y) \in \varDelta ^{\textsf {CAF}}\times \varDelta ^{\textsf {CAF}} \mid M^{\textsf {CAF}},w\vDash \square _{\textsf {PRO}}((\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}}(y))\rightarrow x\leadsto y)\wedge \lozenge _{\textsf {PRO}}(\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}}(y))\}\), i.e. fixed attacks are those that the proponent knows that the opponent is aware of (conditionally on the awareness of the involved arguments). Moreover, the second condition serves to distinguish \(R\) from \(R_{C}\).

  • \(\leftrightarrows =\Big \{(x,y)\in \varDelta ^{\textsf {CAF}}\times \varDelta ^{\textsf {CAF}} \mid M^{\textsf {CAF}},w\vDash \varphi _1 \wedge \varphi _2 \wedge \varphi _3\Big \}\) where:

    \(\varphi _1=\square _{\textsf {PRO}}\Big ( (\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}}(y))\rightarrow (x \leadsto y \vee y \leadsto x)\Big )\),

    \(\varphi _2= \lozenge _{\textsf {PRO}}(\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}} (y) \wedge \lnot x \leadsto y )\), and

    \(\varphi _3=\lozenge _{\textsf {PRO}}(\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}} (y) \wedge \lnot y \leadsto x ) \). So if \((x,y)\in \leftrightarrows \), then \(\textsf {PRO}\) knows (conditionally on \(\textsf {OPP}\)’s awareness of x and y) that either x attacks y or vice versa. Moreover, the meaning of \(\leftrightarrows \) (provided by Definition 22) forces \(\textsf {PRO}\) to consider as epistemically possible situations where \(\textsf {OPP}\) is aware of both arguments but where the attack \((x,y)\) (resp. \((y,x)\)) does not hold.

  • \(R^{?}=\Big \{(x,y)\in \varDelta ^{\textsf {CAF}}\times \varDelta ^{\textsf {CAF}} \mid M^{\textsf {CAF}},w\vDash \varphi _1 \wedge \varphi _2 \wedge (\varphi _3 \vee \varphi _4) \Big \}\) where:

    \( \varphi _1= \lozenge _{\textsf {PRO}}(\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}}(y) \wedge x \leadsto y) \),

    \(\varphi _2=\lozenge _{\textsf {PRO}}(\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}}(y) \wedge \lnot x \leadsto y) \),

    \( \varphi _3= \lozenge _{\textsf {PRO}}(\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}}(y) \wedge \lnot x \leadsto y \wedge \lnot y \leadsto x)\), and

    \(\varphi _4=\square _{\textsf {PRO}}((\textsf {aw}_{\textsf {OPP}}(x)\wedge \textsf {aw}_{\textsf {OPP}}(y))\rightarrow y\leadsto x) \). In words, uncertain attacks are those such that (i) \(\textsf {PRO}\) considers it possible both that \(\textsf {OPP}\) is aware of them and that \(\textsf {OPP}\) is not (first two conjuncts), and (ii) they are not members of \(\leftrightarrows \) (third conjunct).

  • \(R_{C}=\{(x,y) \in \varDelta ^{\textsf {CAF}}\times \varDelta ^{\textsf {CAF}} \mid M^{\textsf {CAF}},w\vDash \square _{\textsf {PRO}} (x\leadsto y)\wedge (\square _{\textsf {PRO}} \lnot \textsf {aw}_{\textsf {OPP}}(x)\vee \square _{\textsf {PRO}}\lnot \textsf {aw}_{\textsf {OPP}} (y)) \}\), i.e. control attacks are those such that: (i) they are private for \(\textsf {PRO}\) (meaning that \(\textsf {PRO}\) knows that \(\textsf {OPP}\) is unaware of some of the involved arguments), and (ii) \(\textsf {PRO}\) is sure that they hold.

Finally, we can reduce controllability of a given CAF to a model-checking problem in the associated \({{\mathcal {E}}}{{\mathcal {A}}}\)-model. To do so, we use the following shorthand, informally expressing that \(E_k\) is part of \(\textsf {PRO}\)’s private knowledge (i.e. that \(E_k\) is a set of control arguments in the associated \({{\mathcal {E}}}{{\mathcal {A}}}\)-model).

$$\begin{aligned} \textsf {private}_{\textsf {PRO}}(E_k):=\bigwedge _{x \in \varDelta ^{\textsf {CAF}}}\Big (x{\upepsilon }E_k \rightarrow \square _{\textsf {PRO}} \lnot \textsf {aw}_{\textsf {OPP}}(x)\Big ). \end{aligned}$$

Proposition 5

Let \(\textsf {CAF}=(F,C,U)\) be a CAF, let \(\wp (A\cup A^{?}\cup A_{C})=\{E_1,\ldots ,E_n\}\), let \(M^{\textsf {CAF}}=(W^{\textsf {CAF}},{\mathcal {R}}^{\textsf {CAF}},V^{\textsf {CAF}})\) be its associated model, and let \(w\in W^{\textsf {CAF}}\). We have that:

  • The answer to \(\textsf {Pr}\)-NSCon with input \(\textsf {CAF}\) and \(a \in A\) is yes iff

    $$\begin{aligned} M^{\textsf {CAF}},w\vDash \bigvee _{1\le l \le n}(\textsf {private}_{\textsf {PRO}}(E_l)\wedge [\textsf {Pub}^{E_l},\bigtriangleup ]\square _{\textsf {PRO}}\textsf {stracc}_{\textsf {OPP}}(a)). \end{aligned}$$

Proof

See “Appendix A5”. \(\square \)
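For illustration, \(\textsf {Pr}\)-NSCon can also be decided by brute force over configurations and completions, without the modal encoding of Proposition 5. The sketch below reuses powerset and preferred from the sketch in Sect. 8.1; R_sym stands for the relation \(\leftrightarrows \), and all names are our own illustrative choices.

```python
def caf_completions(A, A_unc, R, R_unc, R_sym, R_ctrl, CFG):
    """Completions of CAF_CFG (Definitions 22-23), by naive enumeration."""
    ctrl_dom = set(A) | set(A_unc) | set(CFG)
    r_ctrl = {(x, y) for (x, y) in R_ctrl if x in ctrl_dom and y in ctrl_dom}
    for extra in powerset(A_unc):
        A_star = set(A) | set(CFG) | set(extra)
        fixed = {(x, y) for (x, y) in set(R) | r_ctrl
                 if x in A_star and y in A_star}
        optional = [(x, y) for (x, y) in set(R_unc) | set(R_sym)
                    if x in A_star and y in A_star]
        for chosen in powerset(optional):
            R_star = fixed | set(chosen)
            # at least one direction of each present symmetric uncertain attack
            if all((x, y) in R_star or (y, x) in R_star
                   for (x, y) in R_sym if x in A_star and y in A_star):
                yield A_star, R_star

def necessary_sceptical_controllable(A, A_unc, A_ctrl, R, R_unc, R_sym, R_ctrl, a):
    """Pr-NSCon: some configuration CFG such that a is in every preferred
    extension of every completion of CAF_CFG."""
    return any(all(all(a in E for E in preferred(A_s, R_s))
                   for A_s, R_s in caf_completions(A, A_unc, R, R_unc,
                                                   R_sym, R_ctrl, set(cfg)))
               for cfg in powerset(A_ctrl))
```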

Again, the proposition can be easily adapted to other controllability problems. Besides, the fact that the control part of a CAF is representable through public additions reveals that the very notion of control configuration assumes that the speaker (proponent) is sure about the effects of the communication. More refined forms of communication (like the ones we have studied in Sect. 7) seem to deserve future attention so as to develop variants of CAFs.

8.3 Reasoning about opponent models

Strategic argumentation (Thimm 2014) studies how agents should interact in adversarial dialogues in order to maximize their expected utility. A useful tool in this context is opponent modelling (Oren and Norman 2009; Rienstra et al. 2013), a well-known technique among AI researchers that deals with more general adversarial situations (Carmel and Markovitch 1996a, b). Opponent modelling for abstract argumentation assumes, as we do for MAFs, that there is an underlying UAF \((A,R)\) which contains all arguments relevant to a particular discourse (Oren and Norman 2009; Rienstra et al. 2013; Thimm 2014; Black et al. 2017). Based on this, it provides a model of a proponent in a strategic dialogue. The central notion is that of a belief state of the proponent, which is defined in general as a couple \((B,E)\) where \(B\subseteq A\) is the set of arguments the proponent is aware of and \(E\subseteq \wp (A)\) is the set of belief states the agent considers possible for its opponent.Footnote 39 The belief state can be more or less refined: at level 0 of refinement it only includes the arguments the proponent is aware of. At level 1 it also contains her beliefs about her opponent’s awareness, at level 2 it includes her beliefs about her opponent’s beliefs about her own awareness, and so on, up to an arbitrary level n of nesting.

In our semantics, any pointed \({{\mathcal {E}}}{{\mathcal {A}}}\)-model \((M,w)\) for two agents contains all information available to define a belief state of any level n of refinement for agent i. To make this more precise we introduce some notation.

  • \(\textsf {View}_i(w):= \bigcap \{A_i(w')\mid w' \in {\mathcal {R}}_i[\{w\}]\}\).

where \({\mathcal {R}}_i[W']:=\{u \in W\mid \exists u' \in W'(u'{\mathcal {R}}_i u)\}\) for any \(W'\subseteq M[W]\). Intuitively, \(\textsf {View}_i(w)\) consists of the arguments that agent i believes (knows) she is aware of at state w. Based on this, we can define the belief states of agent i at state w for an arbitrary level n as follows

  • \(\textsf {BS}_i^{0}(w):=(\textsf {View}_i(w),\emptyset )\).

  • \(\textsf {BS}_i^{n+1}(w):=(\textsf {View}_{i}(w),\{\textsf {BS}_j^{n}(z)\mid z \in {\mathcal {R}}_{i}[\{w\}], j \ne i \})\).

It is interesting to show that the actual definitions of a belief state provided by Oren and Norman (2009) and Rienstra et al. (2013) are particular cases of our definition, modulo the restriction to specific classes of pointed \({{\mathcal {E}}}{{\mathcal {A}}}\)-models.Footnote 40
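The two clauses above translate directly into code. In the illustrative sketch below, rel maps each agent to a set of pairs of worlds and aw maps each agent to a world-indexed family of awareness sets; these conventions, as well as returning the empty set when an agent has no accessible worlds, are our own simplifications.

```python
def view(i, w, rel, aw):
    """View_i(w): intersection of i's awareness sets over i's successors of w."""
    succ = [v for (u, v) in rel[i] if u == w]
    return set.intersection(*(set(aw[i][v]) for v in succ)) if succ else set()

def belief_state(i, w, n, rel, aw, agents):
    """BS_i^n(w): i's view at w, plus the opponents' level n-1 states at each
    world i considers possible (frozensets only to make states hashable)."""
    nested = frozenset() if n == 0 else frozenset(
        belief_state(j, z, n - 1, rel, aw, agents)
        for (u, z) in rel[i] if u == w for j in agents if j != i)
    return (frozenset(view(i, w, rel, aw)), nested)
```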

In the case of the simple agent models (Oren and Norman 2009, Definition 5; Rienstra et al. 2013, Definition 8), a belief state of level n has the form \((B^{0},(B^{1},\dots (B^{n},\emptyset )\dots ))\), where each \(B^{i}\) is an awareness set (of the proponent if i is even and of the opponent if i is odd), and where \(B^{i+1} \subseteq B^{i}\). Here, \(B^{0}\) is the awareness set of the proponent, \(B^{1}\) the awareness set the proponent attributes to the opponent, \(B^{2}\) the awareness set the proponent thinks the opponent attributes to him, and so forth. From our model-theoretic perspective, this tacitly assumes that we are in an \({\mathcal {A}}o{\mathcal {A}}\)-model where each \({\mathcal {R}}_i\) is functional. Indeed, functionality forces each \({\mathcal {R}}_i[\{w\}]\) to be a singleton. This implies that each \(\textsf {BS}_i^{n}(w)\) has a singleton set as its second element E. Moreover, combined positive and negative introspection guarantee that \(\textsf {View}_i(w) = \bigcap \{A_i(w')\mid w' \in {\mathcal {R}}_i[\{w\}]\} = A_i(w)\), as presupposed in simple agent models. Furthermore, GNIAw forces \(B^{i+1} \subseteq B^{i}\), as desired.

In the more general case of uncertain agent models (Rienstra et al. 2013, Definition 10), a belief state (BE) instead consists of an awareness set B for agent i and a set of belief states E of the opponent, each one of the form \((B',E')\) such that \(B' \subseteq B\). Again, the latter condition assumes that GNIAw holds. The fact that B is the awareness set of the actual state tacitly assumes PIAw as before, but functionality does not need to hold any more, and therefore we are in the more general class of (serial) \({\mathcal {A}}o{\mathcal {A}}\)-models.

A yet more general class of models, extended agent models, is defined by Rienstra et al. (2013, Definition 11). Here virtual arguments are added: arguments the agent is not aware of but considers it possible that other agents are. From our point of view this corresponds to the failure of GNIAw (while PIAw and NIAw still hold).

Applied to this approach to strategic argumentation, our logics and semantics provide a systematic way to reason about the effects of different kinds of argumentative events on the belief states of agents. This can be useful, in turn, to compute the best move for an agent at a given moment of a dialogue. Furthermore, an important part of the work in strategic argumentation using opponent models consists in finding appropriate ways to update belief states. More formally, given a class of belief states \({\mathcal {B}}\) and a universal set of arguments C, the challenge consists in finding functions of the form \(\textsf {upd}{:}\,{\mathcal {B}}\times \wp (C)\rightarrow {\mathcal {B}}\). From this perspective, our Lemma 1 provides sufficient conditions for accomplishing this task given different constraints on \({\mathcal {B}}\).

9 Discussion, open problems and future research

As mentioned in Sect. 3, there are many alternative design choices for multi-agent argumentation frameworks, which are worth discussing. A first choice concerns the finiteness of the argumentative pool, i.e. (a) of p. 8. Indeed, the set \(A\) of potentially available arguments may well be infinite. In principle, this option is viable for a propositional language with a countable set of variables. However, a propositional language allows one to encode the standard solution concepts only in the finite case.Footnote 41 Like many other works in this field, we restrict ourselves to finite AFs, which is enough for modelling most real-life debates.

A second branching option for design concerns (b), the fact that \(A\) is fixed in advance. One can instead assume that it evolves through updates, as in Doutre and Mailly (2018, Sect. 1.3). Our choice is shared by Sakama (2012), Doutre et al. (2014), de Saint-Cyr et al. (2016) and Caminada and Sakama (2017), among others. The rationale behind it is that it imposes no limitation on modelling the acquisition of new arguments by an agent, or other relevant dynamics of information change, at least when the propositional language is rich enough to encode subjective awareness of arguments (Sect. 4).

Another option is not to assume (c), the existence of an objective attack relation \(R\) between members of \(A\). Proposals like Dyrkolbotn and Pedersen (2016) and Baumeister et al. (2018b) avoid (c). This goes hand in hand with the very minimal assumption that agents only share a “pool” of arguments \(A\), with no constraint on how these arguments interact with each other. It amounts to eliminating the \(R\) component of our structures, and may be adequate in contexts where conflicts between arguments cannot be assessed even from a third-person perspective. We should stress, however, that this is just a special case of a MAF, namely one where \(R = \emptyset \). In line with others, for instance Schwarzentruber et al. (2012), we decided to build assumption (c) into our design, since our Kripke semantics still allows us, in the general case, to model radical disagreement about attacks at the epistemic level. Besides, this assumption is acceptable in many applications and provides a straightforward way to define the more complex notions we are after. We note, however, that it is possible to perform the same constructions without assuming (c), by means of a slightly different language and semantics.

Regarding the nature of the subjective awareness of arguments (\(A_i\)) and attacks (\(R_i\)), there are multiple choices to be made, each consisting in accepting or rejecting one of the following constraints:

(d) \(A_i\subseteq A\) (agents are only aware of “real” arguments).

(e) \(R_i \subseteq A_i \times A_i\) (agents are only aware of attacks among arguments they are aware of).

(f) \(R_i \subseteq R\) (sound awareness of attacks).

(g) \(R\cap (A_i \times A_i) \subseteq R_i\) (complete awareness of attacks).

(h) \(A\subseteq A_i\) (agents are aware of all “real” arguments).

Recall that our design choice (Definition 2) integrates (d), (e), (f), and (g), but all of them are open to discussion. Although strongly intuitive, (d) and (e) are questioned by Schwarzentruber et al. (2012), who define a logic for reasoning about “non-existent” or “virtual” arguments \(\{?_0,?_1,\ldots \}\). We do not integrate constraint (h), as it discards the natural intuition that different agents are aware of different sets of arguments.Footnote 42 Under this assumption the agents' views can only differ with respect to the attack relations, as in Dyrkolbotn and Pedersen (2016) and Cayrol et al. (2007). Again, this condition isolates a specific subclass of our MAFs, those for which \(A_i = A\), which can be captured axiomatically by imposing all awareness atoms as axioms. Assuming both (f) and (g), i.e. \(\textsf {SCAA}\), is common in the literature on multi-agent abstract argumentation (Caminada 2006; Sakama 2012; Schwarzentruber et al. 2012; Doutre et al. 2017; Rahwan and Larson 2009). However, \(\textsf {SCAA}\) may seem too idealized in many contexts, since it brings the notion of awareness of arguments closer to that of knowledge of arguments.Footnote 43 Agents may indeed have different abilities to spot conflicts between arguments,Footnote 44 or they may even be entitled to radically different views about the nature of the attacks.Footnote 45 Here again, the differences in awareness of attacks just mentioned can still be modelled in our Kripke semantics. Indeed, what matters here is the distinction between simple \(\textsf {SCAA}\) and common knowledge (belief) of \(\textsf {SCAA}\). The latter is a much stronger assumption, and the difference between the two becomes transparent in the language and semantics of epistemic logic (Sect. 5).
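For readers who prefer a concrete rendering, the following minimal Python sketch (with naming of our own) shows how constraints (d)–(g) can be checked on a candidate structure, where A and R are the universal arguments and attacks and \(A_i\), \(R_i\) an agent's subjective awareness of them:

```python
# Illustrative checks for constraints (d)-(g); sets of strings stand for
# arguments and sets of pairs for attacks.

def satisfies_d(A, A_i):
    return A_i <= A                                        # (d) only "real" arguments

def satisfies_e(A_i, R_i):
    return all(x in A_i and y in A_i for (x, y) in R_i)    # (e) attacks among known arguments

def satisfies_f(R, R_i):
    return R_i <= R                                        # (f) sound awareness of attacks

def satisfies_g(A_i, R, R_i):
    return all((x, y) in R_i
               for (x, y) in R
               if x in A_i and y in A_i)                   # (g) complete awareness of attacks

A, R = {"a", "b", "c"}, {("b", "a"), ("c", "b")}
A_i, R_i = {"a", "b"}, {("b", "a")}
assert all([satisfies_d(A, A_i), satisfies_e(A_i, R_i),
            satisfies_f(R, R_i), satisfies_g(A_i, R, R_i)])
```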

The aim of this paper has been to introduce a new DEL framework for reasoning about multi-agent abstract argumentation. This involves the setup of a three-layer logic: propositional, epistemic and dynamic. Our first goal was to encode the key argumentation-theoretic notions in the language of propositional logic, and we showed that this is possible in the finite case. Concerning the epistemic layer, we provided complete axiomatisations for a number of intuitive constraints on awareness of arguments and attacks. Moreover, specific constraints isolate different classes of structures already used in abstract argumentation to model qualitative uncertainty about AFs, and our logic is comprehensive enough to reason about them (Sect. 8.1). As for the third layer, its language and semantics allow modelling subtle forms of information change (Sect. 7) and reasoning about other formalisms for uncertainty and dynamics (Sects. 8.2 and 8.3).

Although event models for DEL are apt to describe the effects of complex information updates, the language of DEL describes the agential component of a debate only indirectly. In more detail, the language allows one to reason about what happens after some combination of communicative act and information update has been performed, but it does not allow one to reason about what agents “see to it that” in a debate. This is likely to require additional tools from logics of agency and epistemic planning, which suggests promising avenues for future work.