1 Introduction

Designers increasingly need to develop a facility with artificial intelligence, as it becomes part of the way that products and services function and appears in an increasing number of the contexts in which designers work (Benjamin et al. 2021; Dove et al. 2017). However, there are several challenges for design students in engaging with AI, from the broadness of the term AI and the fuzziness with which it is applied (Littman 2021), to the difficulty of getting to grips with the technical and computational complexities of these systems (Yang et al. 2020; Nicenboim et al. 2022). These challenges around understanding and making sense of the new capabilities of AI become urgent as the technology emerges from its latest winter into a new spring, developing at a fast pace (Littman 2021; Samoli et al. 2020; Floridi 2020).

The range of techniques for making creative use of AI has been growing rapidly: Runway offered easy access to generative spaces and now video (Runway 2020); EdgeImpulse offers sound and gesture classification for microcontrollers with training through a web interface (EdgeImpulse 2019); the current set of generative image models such as DALL-E, Midjourney and StableDiffusion and language models (ChatGPT, etc.) allow natural language interaction through the use of prompts. Along with learning materials for more traditional toolkits (TensorFlow 2015; OpenCV 1999) and model development and exchange initiatives (e.g. HuggingFace 2016), these exert a downward pressure on the technical barrier to entry, even as the complexity of the underlying models increases. The conceptual barrier can remain high, though, reducing the possibility for designerly engagement and appropriation. There is a large jump from “my first ML model” to understanding the implications of ML technology, and designers often want—and need—to engage with these implications. Creating models in practice helps, but this needs conceptual framing to direct and contextualise the activities—for example, in related fields, courses such as Creative Applications of Deep Learning (Mital 2016) and its more provocative follow-up Cultural Appropriation with Deep Learning (Mital 2021) look at visual practice, while Machine Learning for Musicians and Artists (Fiebrink 2022) unpacks these systems for creative practitioners.
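To make the scale of that jump concrete, consider a minimal sketch of what “my first ML model” looks like today. This is our illustrative example using scikit-learn (a library chosen for illustration, not one of the toolkits named above): the code is a handful of lines, while the conceptual questions it leaves open are untouched.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# "My first ML model": a few lines train a digit classifier end to end.
X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0)
model = MLPClassifier(max_iter=300).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")

# None of this answers the designerly questions: whose data was this,
# where does the model fail, and what relations does deploying it create?
```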

However, simply thinking about why it is hard for designers to appropriate AI technologies into their practices also misses a key question: what can design practice bring to the development and understanding of AI systems (Benjamin et al. 2021), especially as the technologies become more pervasive and more collaborative (Wang et al. 2020)? Designers have their own strategies for making use of, critiquing and appropriating new technologies (Westerlund and Wetter-Edman 2017), so there is an interest in understanding what designerly methods could reveal about human–AI relations, particularly where they involve interactions between humans and technological systems—considering the “social, political, ethical, cultural, and environmental factors of implementing AI into daily human-to-computer interactions” (Wong 2018). Design research methods—speculations (Auger 2013; Kirman et al. 2022), fictioning (Forlano and Mathew 2014; Wong et al. 2017; Troiano et al. 2021; Benjamin et al. 2023), probes and toolkits (Sanders and Stappers 2014), more-than-human design (Coulton and Lindley 2019) and the general practices of Research through Design (RtD) (Giaccardi 2019; Stappers and Giaccardi 2017)—are all well suited to thinking into the socio-technical aspects (Holton and Boyd 2021; Sartori and Theodorou 2022; Theodorou and Dignum 2020), possibilities and implications of AI in everyday life, just as they have been applied to understanding digital sensing technologies (Pierce 2021), blockchains (Murray-Rust et al. 2022a), the future of automation (Cavalcante Siebert et al. 2022) and so on.

Our aim is to help students to design products and services that make use of AI technologies, while developing a critical understanding of their implications. This means articulating both the technical and relational aspects of AI so that designers can meaningfully shape the development of products, services and systems even if they are not intimately familiar with the technical details of its operation. As such, we are looking for ways to sensitise interaction designers to AI—to create experiences rather than explanations. In relation to the typology developed by Yang et al. (2020) of ways to aid designers around AI, our work contributes to the early stages of ‘creating AI-specific design processes’ by using concrete exercises to probe ways in which educators can support designers in ideating in a space mediated by the capabilities and implications of AI systems.

To explore this space, we created a set of methods for designing AI-driven products and services (Sect. 3.2) that draw on theories about how people relate to technology, and to AI in particular. These methods take the form of short, autonomous, experiential exercises that can be used to develop and enrich the design of interactive technological products and services. We introduced these exercises partway through an interaction design course (Sect. 3.1) in which students (n = 100) in small groups (n = 28) were asked to design future products and services through iterative prototyping and testing. We collected an immediate written reaction from each group as to what the students had done with the materials and the aspects they found useful or resonant. We then interviewed a self-selecting subset of the students (n = 12) and their coaches (n = 7) to dig deeper into questions of how the methods had changed their understandings of and relations to AI.

To explore the potential of these methods, we investigate the following research questions:

  • RQ1: How do the exercises stimulate and modulate changes to the students’ design process to accommodate AI, in particular the way that they are conceptualising and prototyping their projects?

  • RQ2: How do these experiential exercises affect students’ grasp of AI and ML, in particular in relation to the interactional, relational and contextual qualities that are key points in recent theoretical developments around AI within HCI?

  • RQ3: How do the exercises help to develop a critical design perspective while engaging with AI technology as a socio-technical system?

Through investigating and discussing these research questions, the contributions of this work are:

  1. A set of exercises that translate theoretical developments in design and AI into experiential exercises for designers that can be carried out autonomously, with reflection on the experiential, pragmatic and reflective qualities that made the exercises effective. These exercises are available at [redacted] for future use and development.

  2. Insights into how and for what to apply these exercises in a pedagogical context to support design processes for creating AI-enabled products and services.

  3. Insights about how these exercises affected students’ reasoning and design activities, bringing agency, relationality and criticality alongside development of technical facility.

  4. Methodological reflections around the possibilities afforded by the methods and how these contribute to nurturing a uniquely designerly AI culture that supports future design education.

2 Background

Working with AI presents particular challenges for designers. One of these is engaging with emerging and complex technologies that behave differently from traditional design materials: Yang et al. (2020) point at two key difficulties here, the uncertainty about the capability of AI systems and the complexity of their outputs. A second challenge is understanding AI itself, given that the metaphors and imaginaries around it obscure the real processes that are needed for maintaining such a technology (Murray-Rust et al. 2022b). In contrast to competing terms like ‘complex information processing systems’ or ‘machine intelligence,’ the term AI fires the mind with ideas of human-like reasoning. While these imaginaries seem to be better for marketing, they are certainly no good for developing a grounded sense of the capabilities of AI as a technology (Hildebrandt 2020).

2.1 Designing AI

Despite the challenges for designers to engage with AI, there are currently many areas where design and AI touch on each other.

At a low level, there is growing attention to the meeting of AI and user experience (UX), as the new possibilities offered by the technology allow new kinds of interaction, and are susceptible to new pitfalls. Techniques are emerging that help to create user interfaces that work with AI systems (Subramonyam et al. 2021a, b), or support innovating AI-powered services and systems within enterprises (Yildirim et al. 2022). This can be seen in Microsoft’s guidelines for human–AI interaction (Amershi et al. 2019), or Google PAIR’s Guidebook (PAIR 2020), as well as efforts to bring HCI together with AI (Inkpen et al. 2019). Recently, the identification of ‘AI capabilities’ (Yildirim et al. 2023) provides a concrete way to think about design spaces for the interactional aspects of computational systems. Negotiation between AI and HCI can be deep and subtle: interactional affordances help to calibrate trust and reliance between humans and AI; conceptual metaphors sculpt the relations formed with conversational agents (Jung et al. 2022; Khadpe et al. 2020); and appropriate abstractions make AI qualities ready to hand for creative practitioners (Fiebrink 2019; Tremblay et al. 2021). User experience, in its broader sense, goes beyond designing the immediate experiences, with work starting to consider how to develop frameworks for creating more or less personal, dependent and discretionary interfaces (Kliman-Silver et al. 2020), or how to generate heuristic models of meaningful engagement with AI artworks (Hemment et al. 2022b).

Zooming out slightly, a collection of theoretical issues around AI relate to emerging fields in the third (and fourth) wave HCI and philosophy of design communities. Scholars in those areas have grappled with how the concepts used in design and HCI practices might be tied to the industrial era, and how they might have to change and adapt to the new kinds of products and materials which are enabled by AI. This includes lines of research such as post-industrial design, more-than-human practices (Giaccardi and Redström 2020), entanglement thinking (Frauenberger 2020; Murray-Rust et al. 2019; Hodder 2016), and fluid assemblages and multi-intentionality (Redström and Wiltse 2018; Wiltse 2020). All these offer vibrant pictures of a new set of relationships between humans and the material world in which both entities ‘co-constitute’ each other. Along with reorienting the relationships between humans and non-humans, scholars within those fields have been rethinking what it is to ‘do design’, breaking with traditions focused on the subject–object dichotomy (Giaccardi and Redström 2020), where design goes beyond a mere problem-solving enterprise and becomes an ongoing and more inclusive practice. Although these theoretical developments seem to be gathering momentum, they are still not fully translated into practical tools for designers—the jump from Barad’s agential realism (Barad 2007) to configurations of bits and programmes takes careful work (Scurto et al. 2021; Seymour et al. 2022; Sanches et al. 2022).

At a broader scale, beyond the immediate interactions, some of the theories and practices at play are oriented towards engineering particular system qualities and properties: value-sensitive design can help to make sense of fluid and evolving systems (de Reuver et al. 2020), where many different human values may be at play (Yurrita et al. 2022; Fish and Stark 2021; Shen et al. 2021); questions of meaningful human control modulate the relations of responsibility between humans and automated systems (Cavalcante Siebert et al. 2022), as does responsible AI design (Benjamins et al. 2019). Here, design is an instrumental part of making systems behave in certain ways. AI ethics is a broad field (Hagendorff 2020); as well as directly affecting system properties, work from the Fairness, Accountability and Transparency (FAccT) community looks to support documentation that helps maintain these properties in communities, such as documentation for models and datasets (Mitchell et al. 2019; Gebru et al. 2018), and the ethical aspects of system development (Mohammad 2021; Murray-Rust and Tsiakas 2022).

2.2 AI and design education

The specificities and challenges of AI and ML technologies feed into an ongoing discourse at the intersection of design education and technological progress. Traditional formats and scopes for carrying out design are being questioned and revised, with canonical, linear, causal and instrumental approaches being criticised in favour of novel models inspired by complexity theory, systems science and practical philosophy. This moves towards an aim of reconceptualising design as a moral act (Findeli 2001; Lin 2014). Designers and design researchers, in fact, are increasingly recognised as actors whose decisions have ethical as well as political implications (Lloyd 2019). In parallel, the societal implications of AI and ML are ever more pervasive and unpredictable. However, when introduced in design education, these technologies are typically either approached as the ultimate tools to learn or used as ‘context’ for grounding alternative and critical design explorations. On the one hand, ML courses are increasingly offered to design students to promote ML/AI literacy, but remain an addition to the main curricula rather than being integrated into project courses [as in (Jonsson and Tholander 2022; van der Vlist et al. 2008)]. These technologies are still rarely integrated in design education (Dove et al. 2017), and are often approached with the belief that merely exposing students to cutting-edge technology can stimulate the emergence of innovative and technologically advanced design solutions to real problems (McCardle 2002). In other cases, however, the approach is diametrically different: AI and ML are conceptualised as phenomena to be understood and questioned because of their potential impact on society. For instance, Auger (2014) used the theoretical lens of domestication to challenge students to ideate future domestic robots and reflect on their implications in everyday settings. These two perspectives on AI and ML, whose focus we summarise as technical competency vs critique, tend to remain distinct approaches with apparently opposite scopes. Even when there is an explicit commitment to bridge the two approaches, existing pedagogy struggles to combine the ambition of building AI literacy with fostering a critical mindset around AI/ML projects, and reflections do not lead to rich critiques about situated and contextual implications of AI and ML unless they are integrated into project development. There are some counterexamples: Jonsson and Tholander (2022) purposefully crafted a course for students to approach and appreciate AI tools as creative partners, and learned that AI qualities such as uncertainty, imperfection and under-determination can be a rich source of inspiration for generating creative expressions as well as powerful triggers of reflection. Mital’s Cultural Appropriation with Deep Learning course (Mital 2021) weaves together learning about the operation of deep networks with recognising their role in society. Fiebrink’s work (Fiebrink 2019) distinctively looks at ML as a design material and situates it within project development. Perhaps most similar to the work outlined here is ‘Graspable AI’ (Ghajargar et al. 2021, 2022; Ghajargar and Bardzell 2023), which brings together tangibility and AI, using explanation as a path to understanding and form as a language for communicating AI affordances. Even in these cases, however, the emphasis is on one side of the spectrum, that is, on how to teach ML effectively to any population and enable the emergence of new creative outputs.

The disciplinary call for exposing the design questions involved in making AI and ML systems—as well as the complexity and trade-offs that implementing these in the world implies (Bilstrup et al. 2022)—remains largely unanswered. Our work sits at the intersection of these experiences and aims to fill the gap between technical efforts and critical explorations. Specifically, we set out to integrate AI and ML explorations within the development of design projects, in a way that both enables students to build AI literacy and empowers them to take a critical stance towards these technologies in society.

2.3 Summary and research direction

Part of the work of design as a discipline is to mediate between these philosophies and actionable practices that can be brought to bear on particular situations. That is the starting point for the work presented in this paper: we are interested in how to bring conceptual developments from design theory and AI into something that is at hand to design students, that can make a difference to how they go about conceptualising and prototyping interactions.

To bridge the gap between theoretical understanding and practical, technical engagement with AI, we propose that three levels of engagement between AI and design are all potentially at play within design projects creating AI-powered systems:

Interactional affordances of AI that allow new means of interaction between systems and people. At a low level, AI brings new possibilities for sensing, responding, recognising and classifying from which to build interactions. These interactional affordances and possibilities for action (Stoffregen 2003) offered by machine intelligence can take the form of capabilities offered by the technology (see Yildirim et al. (2023) for a comprehensive overview), but also of modulations of existing capabilities with AI-specific qualities such as probabilistic outcomes (illustrated in the sketch after these three levels).

AI relationality as it is brought into constellations and forms new relations between people and things. Beyond the immediate interaction, design with AI intervenes conceptually and materially in constellations (Coulton and Lindley 2019) of humans and objects. Designers must navigate the increased agency and depth of interaction that intelligent systems bring, and the changes in the way that we understand and relate to technological systems.

Wider implications of AI as it affects social structures and people’s lives outside the immediate interactions. Concerns about the implications of systems are not new, but AI and data-driven systems that are built through processing large amounts of data about people bring new and subtle ways in which they can be unfair or unjust, more blurring of responsibility and more potential unintended consequences at scale.
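As a minimal illustration of the first level (a sketch with invented values, not an example drawn from the exercises): swapping a deterministic sensing capability for an ML-based one changes the designer's raw material from a boolean into a distribution of scores.

```python
# A deterministic sensor yields a boolean; an ML capability yields scores.
# All values here are invented for illustration.

def motion_sensor() -> bool:
    return True  # present / not present: a fixed, certain design material

def person_detector() -> dict[str, float]:
    # A classifier returns a distribution, not certainty; the designer now
    # has to decide what the interaction should do at 0.62 "person".
    return {"person": 0.62, "pet": 0.31, "shadow": 0.07}

print(motion_sensor(), person_detector())
```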

For the work at hand, we are interested in how these levels relate to design education, in particular how students start to engage with AI as a design material. To achieve broad coverage, we looked to create methods that could engender engagement with the specifics of working with AI systems on these three levels, as well as balancing the educational concerns of developing a better understanding of and facility with the technology and encouraging critique. Based on this, we created a set of ‘design exercises’—rapid, experiential engagements that draw on the theoretical developments above but can be carried out productively in the context of conceptual development and prototyping interactions with AI systems (Fig. 1).

Fig. 1
figure 1

Situation of the methods under discussion across two axes: (i) the level of consideration, from direct interactional affordances, through building relationality, out to wider implications, and (ii) the balance between developing facility with the technology and critiquing its uses

3 Study

3.1 Course and context

The context of this study is a one-semester (20 week) design and prototyping course for first year Masters students in the ‘Design for Interaction’ programme at TU Delft. All students in the course (n = 100) have a design background with a mixed range of computational skills, from no technical knowledge to beginner level in software engineering. The students were grouped by the course coordinators into 28 teams, and coached by seven experienced coaches from the Industrial Design Engineering faculty.

The course is structured in three stages, with student teams of 3 or 4 students working 13 hours per week on their design project (Fig. 2). They worked on design briefs that asked them to speculate about near-future interactions supported by technology. Many of these briefs were provided by client companies, for example new forms of human–vehicle interaction, possibilities for more sustainable cooking through smart kitchen appliances, and pervasive computing in hotel rooms. The students had little to no pre-course familiarity with Machine Learning and AI methods, theories and tools; however, most of the coaches had at least some experience with these technologies.

The main learning objective of the course is to introduce students to various ways of prototyping with interactive technology. Students were asked to design within the context of the client company, creating and testing a new iteration of their prototype each week in discussion with their coach. They were prompted to draw on some form of AI or ML, although technical capabilities could be acted out rather than implemented in code. The course ran in three stages: a “First Shot” familiarisation with AI and technology, “Iterating Forward” to develop concepts, and “Polishing Up” the final ideas and prototypes (Fig. 2). Each stage ended with an exhibition of interactive prototypes, to which client companies were invited to provide feedback to the student teams.

Fig. 2
figure 2

Course structure. In the first stage (4 weeks) students were given context on AI and ML, and hands-on engagements with AI technology were provided through a series of workshops with existing tools (Edge Impulse, Teachable Machine and Voiceflow). At the end of the first stage, teams presented multiple ideas demonstrated in multiple early prototypes. The second stage introduced lectures covering AI capabilities, human–agent partnerships and the conceptual shifts mentioned above as the students developed their core concept, leading to a second exhibition of interactive prototypes. The third stage introduced the exercises discussed in this paper, as the students refined their projects towards a highly immersive final exhibition with one or more interactive prototypes

3.2 Exercises

The intervention involved a set of 9 exercises (Table 1). Each design exercise was introduced on a single page, containing a title, a short description and instructions on how to execute the exercise, a background section describing the intent, usefulness and ideas behind the exercise and references to papers and related projects (Fig. 3, see supplementary material for full set). The choice of this set of exercises was exploratory: we derived them from a combination of existing design practices, emerging work from the researchers and the theories mentioned above, through extensive discussion between the researchers. We aimed to have a spread of exercises across immediate interactional affordances of AI, mid-level human–machine relations and concerns about the wider implications of AI, as well as across developing fluency and supporting critique (Fig. 1, description in Table 1, details in Supplementary Material). Some of the methods were pre-existing explorations, some had been used extensively and some were adaptations of existing techniques to fit the AI context or the autonomous format. There was a strong focus on activities that could be performed relatively simply by students, that were experiential, and that would work across a range of topics and levels of technical accomplishment. Each exercise was intended for application to an existing project, i.e. not a brainstorming or early ideation tool, but a way to develop existing work.

Fig. 3
figure 3

Example of the exercises, showing (1) title, (2) expected time, (3) suitable project types, (4) process, (5) custom illustration, (6) background, (7) references and example projects (full set of exercises in Supplementary Material)

Table 1 Name, key references and description of each method

3.3 Execution and data collection

Towards the start of the third stage, after feedback from the second exhibition (Fig. 2), a half-day workshop was organised in which all student teams were introduced to the 9 design exercises with the aim of refining their projects. This timing was chosen primarily for educational reasons—the methods here were designed to help develop and refine existing ideas, rather than generate new ones, so we waited until the final stage of the course. This timing was the subject of some discussion (see Sect. 4.1.2).

This half-day workshop allowed each group to execute one or two of the design exercises for their design project. The output of the workshop was captured on A3 templates, including a questionnaire with some first prompts on the usefulness and effectiveness of the exercises applied. The workshop was set up to be executed autonomously by the student teams, who selected the exercises themselves, with lecturers present to observe and assist when necessary. All output materials of the design exercises during the workshops were collected afterwards.

Two weeks after the workshop, we invited each team to select a representative to take part in a one-to-one semi-structured interview to discuss their experience and the effect it had on their project. To minimise educational disruption at a busy time and limit the possibility of coercion, we did not attempt to get full coverage, but allowed self-selection by the students, in return for a €20 contribution to the team’s prototyping budget. This led to 12 of the 28 teams participating in the interviews. We interviewed all coaches (n = 7) the week before the end of the course to see what effect they had perceived on the students’ work. Interview questions and structure can be found in the Supplementary Material.

3.4 Analysis and evaluation

The interviews were audio recorded, and both the interviews and the output materials from the design exercises were transcribed and analysed by a team of seven researchers. We inductively coded the written materials, the student interviews and the coach interviews. We conducted a collaborative thematic analysis: the coding team collectively familiarised themselves with the data and defined a shared coding scheme. At least two members of the team coded each of the transcribed materials using this scheme. Finally, coded materials were collectively discussed to synthesise insights into key themes, framed by the three levels of engagement with AI discussed earlier.

4 Findings

Our findings are structured in two parts, which build on both the A3 worksheets (n = 28) and the student and coach interviews (n = 12 and n = 7, respectively). While the analysis of the A3 sheets revealed recurring topics and common themes, the interviews revealed in-depth insights about what the students took from the exercises. The first part (Sect. 4.1) covers the execution of the methods: which ones were chosen and how they were perceived and valued by the students. The second part (Sect. 4.2) describes the links made to AI and machine learning at the interactional and relational levels, as well as wider implications. In all cases, comments from student interviews are marked as [p⟨id⟩] and those from coaches as [C⟨id⟩]; extra context about the project that the quote relates to is given in square brackets. The students who participated in interviews were working on projects around: comfort and behavioural encouragement while driving, as well as behaviour modelling and matchmaking (for Ford); collection of data while surfing and intelligent ski clothes (for O’Neill); smart objects and energy manifestation in hotel rooms (for Citizen M Hotels); photography for reconstructive surgery (for Erasmus Medical Centre); and speculating on spirituality and life coaching with AI (for the DCODE project).

4.1 Method execution

To build context about the way the exercises were carried out, we give a quantitative summary of students’ opinions of their project before the workshop, and their evaluation of the clarity of the exercises. We then look more qualitatively at two themes: students’ sense of the relevance and overall evaluation of the methods, and an analysis of the ways in which they found the methods useful. Table 2 summarises the number of times each method was used and provides key quotes for its use in four areas: concept development, detailing interactions, understanding AI and supporting reflection.

4.1.1 Quantitative self-assessment and clarity of the methods

Analysing the worksheets, we looked into how many times each method was used as well as the perceived clarity of the instructions (Table 2). 20 of the 28 groups carried out two exercises, with the remaining 8 carrying out only one. Counting responses to questions about their projects where a value greater than 0 was given (Fig. 4): 16 groups (0.57) felt their project critically investigated technology; 12 (0.43) were solving real-world problems; 15 (0.54) made use of AI qualities; 22 (0.79) engaged with complex relationships; and 16 (0.57) intended to consider the wider implications of their work.

All of the methods were rated as clear (Likert scale from −3 = “very unclear” to 3 = “very clear”; m = 1.5, sd = 1.0, minimum per-method mean = 1.0), with only a single instance being rated negatively. This indicates that students felt they understood the purpose and structure of each method.
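For illustration, a minimal sketch (with invented scores, not the study data) of how such per-group worksheet responses can be tallied into the counts and fractions reported above:

```python
# Hypothetical worksheet data: one Likert score (-3..+3) per group per
# question; the real study had n = 28 groups.
responses = {
    "critically investigates technology": [2, 1, 0, -1, 3, 1, 0, 2],
    "solves a real-world problem":        [0, -2, 1, 3, 0, 1, -1, 2],
}

for question, scores in responses.items():
    positive = sum(1 for s in scores if s > 0)  # groups answering above 0
    print(f"{question}: {positive}/{len(scores)} ({positive / len(scores):.2f})")
```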

Fig. 4
figure 4

Student response (per group, n = 28) to questions about their orientation and their project scope. Answers are on Likert scales from ‘Not at all’ (−3) to ‘It’s the core of our project’ (+3)

Table 2 Name, key references and description of each method, along with the number of groups who used it and the average score for clarity

4.1.2 Relevance and situation in the course

Most groups chose exercises to address what they considered unexplored in their projects, or in some cases, even limitations of their concepts. For instance, Metaphor Shifts was picked to “look for something else that better describes [their project]” [p1] or for “finding a nice metaphor [to make] interaction with the AI more empathetic to the user” [p15]. Roleplaying AI Networks offered the hope of being “really precise and defined in the personality that we were gonna give the AI [that was deciding people’s futures]” [p27], and Uncertain Interactions was chosen to help “map out all the responses and interactions that we were not considering before” [p19] in a multi-object interaction. Other groups saw the exercises as a more general way to “check if we were in the right direction” [p1], “plan ahead like the possible problems” [p6], “have a discussion point [...] instead of just everybody thinking in different directions” [p15] or, more radically, to “just start again and we go somewhere else” [p6]. Some methods were explicitly avoided because the groups felt they had experienced them before, e.g. “role playing” [p19], or because they didn’t fully understand what a method entailed [p16]. Although the students reported that the methods were relatively clear, they suggested that individual differences might have affected how students interpreted the exercises: “we’re all from different cultures. So, we all interpreted some questions differently” [p1].

There was a common response that this activity would have been more useful earlier in the course [p2,p5,p16,p19,p27, ...] and that it could have helped generate more prototype ideas [p23]. Part of this was due to the sense that the activities felt like “an ideation—like an inspiration activity” [p2]. As students saw the exercises as tools for divergent thinking, they would have liked to use them for ideation in close connection to the prototyping experimentations in the first period of the course [p5]. Others were concerned that the moment when they did the exercises was their time to “optimise the prototype for the exhibition” [p16] and wanted to spend all of their time making. In contrast, feedback from coaches on the timing was more positive: “To have sort of a zoomed out exercise at that point is I think a very powerful thing to do. [...] if you don’t know where you’re heading, then all these things, I don’t think they will help you. [...] So I wouldn’t move it.” [CIa]. Several students echoed this perspective, emphasising how it helped their process, e.g. “because we were kind of stuck with our idea in general” [p7], or that it helped “think of more details” [p20] around a developed idea.

Overall, there was a positive attitude towards the activities, even from groups that were initially suspicious: “we were quite surprised because we were thinking ‘Ohh workshop again. [...] What’s going [to come] out of it?’ [...] And then in the end there was actually some things that really helped us.” [p1]. Some negative responses (4) revealed students’ concerns about carrying out the exercise properly [p5], or spending too long on one interaction [p19]. Some (2) had a hard time finding usefulness in the experience [p15,p16], as they were already familiar with the methods: one “wouldn’t say it brought me a new understanding because the metaphor is something we [already] had” [p15].

4.1.3 Perceived utility of the exercises

Many students saw the exercises as a form of “ideation, like an inspiration activity” [p2], “kind of a brainstorm” [p7] that can help “to get a better idea” [p7]. Several groups noted that they came up with different and more interesting ideas [p5] and that they “could use this new inspirations” [p23]. The methods were seen as useful for sharpening projects and defining practical next steps, such as planning for when things went wrong, or checklists of common concerns. Metaphor Shifts was particularly generative of new ideas [p5,p16]. Beyond this, the methods were seen to help in the following four areas:

Conceptual development: The methods supported articulation and “helped us get our story right, like the overall purpose of the concepts” [p2], to develop “a better detailed new metaphor” [p5] and to “make [a] choice in what we wanted” [p7]. The benefit of gaining more conceptual clarity was also mentioned by some coaches [CLu]. Uncertain Interactions was seen as useful for mapping out the edges of a concept, so that students could easily get into details and next steps [p6]. The methods also helped ground ideas, asking whether the concept “can also work on the AI or do we need some future technologies that are not there yet to make it real” [p1]. In some specific instances, the exercise helped to “start thinking about time” [p15], or to “find new ways of taking the same idea and spreading it” [p5]. The exercises helped to get an overview of things that students should think about [p1], making implications concrete and graspable, in a way that is “so in your face that you don’t even think about the fact that it will be in the future” [p7].

Refining interactions: Many groups came out with a more refined idea about how their conceptual interaction should play out, as the exercise “asks you to go into parts that maybe you don’t want to explore” [p6] and make projects more well rounded. The activities also helped students define interaction contexts better. Groups felt invited to “draw [AI] already in the context” [p15], and to “think about the interaction with some of the objects in the [interactive hotel room] scenario” [p19]. Refinements also pushed them to account for the potential meaningfulness of the projects, to “clarify intentions” [p20] and anticipate outcomes, e.g. “what happens if the user doesn’t understand what [the smart objects are] talking about” [p19]. The experiential nature of the exercises helped to “translate something abstract as "being challenged" or "supporting" [good behaviour while driving] to something actually tangible” [p2], to think into the “aesthetic experience” [p15] of AI where the “metaphor [of ritual cleansing for data collection] helped to think about materials as well” [p15].

Reflection: The workshops were seen as a moment of reflection, a break from the “many layers in such a project” [p25] to focus on particular aspects. This could be on a technical level for the groups who “never really took the time to think about AI” [p2] or more interactional when they “stopped to think about this character sort of thing” [p27]. There was a developing “critical lens, in terms of moral responsibility” [p25] and a sense of “seeing how important this is, to acknowledge the mistakes, to be trustworthy” [p6]. Beyond the initial designerly sense of responsibility, they engaged with broader factors contributing to “moral responsibility for an AI system [that encouraged spirituality]” [p25]. Overall, the moment for reflection was seen positively, developing aspects of their work that were not thought through, and a sense that “confidence comes once you [...] manage the critical points” [p6] of the interactions.

Understanding AI: The workshop improved the confidence of students about working with AI, as “before the course it was just like ‘I don’t know how to use an algorithm to do something cool’ [...] and this makes it kind of [makes] everything just specific in one workshop” [p2]. This was often not based on a deeper technical understanding of algorithmic operation, but on thinking about how the AI would relate to things around it. Some groups ended up “actually using more AI because of this [workshop]” [p7], with confidence coming from “now that we know what’s going wrong, and we know how to respond to that” [p6].

Fig. 5
figure 5

Conceptual map of students’ reflections on the benefits of the methods grouped across the three levels of AI engagement: interactional affordances, relationality and wider implications

4.2 Key themes for engaging with AI

We now discuss the findings in relation to broader theoretical developments in HCI, according to the three levels identified earlier: interactional affordances, relational questions and wider implications. An overview of these findings can be seen in Fig. 5.

4.2.1 AI interactional affordances

Students found that the workshops illustrated that “there are actually a lot of possibilities with AI” [p7], beyond the tutorials at the start of the course, and that working through the experiences left them with a “whole list of things that [AI] could say or do” [p2]. They already had some experience with particular topics, but this opened up a greater sense of how these possibilities could be deployed in relation to their work. This did not always change the concept of the interaction, but did provide a confidence that many interactional designs could potentially be realised.

Data and meaning: Role playing helped with sensitising the students to the role of data in AI-driven systems, questioning “where is the AI getting the information?” [p27] both generally and through very detailed questions of “where we’re gonna put [the camera that understands human–vehicle interactions]” [p5]. The coaches noticed the attention to physical detail as well, seeing development of a “way of bringing the data and looking at it and experiencing it” [CIa]. There were moves to think about how to work with people in wheelchairs, and what it would mean to “recognise these things and build the dataset” [CLu], as well as the broader question of “how [the collected data] can be meaningful for you as a person” [CIa].

Character and expression: The experiential nature of the exercises was, unsurprisingly, suited to engaging with designerly questions of the character and expression of the autonomous parts of the system. Students noticed the possibility that they could be “really precise and defined in the personality that we were gonna give the AI” [p27], questioning default assumptions about how the system might respond. With conversational agents, it was noted that “there is a lot of space between the yes or no” [p19], but also that working probabilistically could smooth out interactional challenges, so that humans “don’t have to become machines ourselves” [p6]. The possibility arose to create pluralistic engagements that gave “different answers based on different characters and based on different situations [for patients undergoing reconstructive surgery]” [p23]. This opened the possibility of making stronger bonds with users and working on an emotional level, which we return to in the next section.

Interactional limitations: In general, the coaches were more sensitive than the students to the potential limitations of the technology, for example noticing when “the way they acted out looks good on screen but it doesn’t reflect the deeper issues with understanding [...] Whereas if you use a conversational AI model I think you will run into a lot of problems that are hard to act out” [CGi]. It was clear to them that some of the enactments would require sophisticated behaviour that could easily be glossed over with Wizard-of-Oz techniques, and they questioned whether the exercises could also point to these moments of glossing, or help notice points of complexity. For the more technically realised groups, the coaches noticed students working around limitations of the technology, where “it was not very good at detecting facial expressions, but you made a hand gesture” [CGi] that conveys emotion purposefully, leading to a rethinking of the interaction schema.

4.2.2 AI relationality

Students felt that “[t]here are so many layers in such a project, where you are constantly building” [p25] and noted that the workshops took them into some of the complex, multi-layered aspects of working with interactive AI systems.

Deeper relationships: Following the theme of character above, the workshops prompted students to think about the ways that humans related to the things being designed, giving an impetus to “think more in an empathetic way” [p15] about the end users and what AI mediation would “mean for a human to human relationships” [p25]. Roleplaying the situation with the device helped to look across some of the other people around the interaction, for example working with a system that was helping to take medical photos for reconstructive surgery and seeing “the relationships between the AI [and] doctor, assistant to friends or to your family members” [p23]. This was partly driven by a sense that the AI systems could interact in increasingly human-like ways, with metaphors like “a friend in your car” [p5], or a pet. There was a move to look at some of the longer-term relationships formed and the bonds that people made with AI systems. Students developed increasingly anthropomorphic concerns, from whether “people feel at a loss after they need to give [their smart mirror] back” [p23] at the end of a process, to questions of developing care and love relations with the objects.

Creepiness and agency: Interestingly, some of the more-than-human metaphors helped students to think about when agency was troubling, and to be “open to more scenarios that we didn’t see” [p20]. Manifesting home energy use using a metaphor of ‘fireflies’ caused a concern that it “will follow you through your room as a dog follows you. This might be kinda creepy. So what if they [users] don’t want to be followed?” [p20]. The potential intimacy of relations with a vehicle raised concerns about “how intimate your interaction with your decentralised car [should] be” [p5], and how “if you’re driving and you’re stressed and you somehow just get like this random unexpected hug from your car” [p5] it would cause emotional discomfort. Even when autonomous behaviour was not emotionally invasive, there were concerns that “sometimes the [smart hotel room] objects want to speak for themselves but at the same time you don’t want to scare the human that is the guest in this room” [p19].

More-than-human relations: Going beyond metaphors of caring for cars as one might care for a dog, coaches noticed that students would “use design as a medium to amplify the voice of nature” [CIo] or “activate [...] energy consumption in a different way than just a tool” [CLu] in their AI-mediated interactions, making a shift both to non-human perspectives and to the idea of technological mediation rather than tools for particular outcomes. The students looked into new relationships that might emerge, e.g. designing “clothes to learn from every person that wears them to [and] grow its own personality” [p16]. The coaches noticed that the roleplaying aspect of the exercises prompted critical reflection on the scenarios and relationships at hand, including noticing “that the setting that they were imagining and the role of the AI within that setting was not a very good fit” [CGi]. Students found the practice useful for articulating what their vision for the future of human–AI relationships at individual and societal levels ought to be, including questions of governance and democracy.

4.2.3 AI and wider implications

Responsibility: While the methods targeted at interrogating control (Meaningful Human Control) explored agency and control, other methods (Metaphor Shifts) still gave space for these questions to arise. Students reflected on “considering moral responsibility for an AI system” [p25] within the creation process, and the coaches noticed that the workshops provided “a way to create distance and look at the project from a different perspective” [CIa], to re-evaluate the project beyond the immediate concerns of development, with a sense that it was the designer’s responsibility to make sure that purposes and potential issues were clear upfront. Some students found that the workshops made concrete the idea that people might misuse their system, so for a friendly car system they “gave ourselves some guidance for the next steps, [not] for concepts [...], but more like OK, this is now a checklist that we need to put next through concept every time to make sure we think about this” [p1]. Responsibility often came through thinking through what might go wrong, with evidence of ‘zooming out’ through the exercises to think about what would happen if these systems were widespread, and their failure modes constantly present for users.

Consent and privacy: Several students mentioned issues around consent; while some felt this was a core part of their existing work, others found that discussion around the workshops was what they needed to really understand the implications, and “a solution for something that [is] difficult to think about” [p1]. Groups managed to “dig deeper in that space” [CLu] and better manifest the issues that they were already dealing with, and in some cases this meant that “[consent] was actually a very explicit part of their final concept and that was not at their departure, I think, was driven in part by going a bit more speculative than they were imagining at first” [CGi].

Vision and criticality: A common point from the students was that these workshops helped them think beyond the initial concerns of prototyping and into the multi-layered nature of the projects—not just around AI responsibility, but in that “it asks you to go into parts that maybe you don’t want to explore” [p6] and rethink the purpose and shape of the project itself. Coaches were mixed about whether they saw changes in the level of critical thinking around the workshop, with some noticing no change, some a progression, and some seeing a strong difference where critical thought was brought in. Some of these were trade-offs: “They became more critical. They were focussing more on the experience, but I’m not sure they were more engaged with the AI” [CIo]. However, others noticed engagement with the human–AI relationships, questions of datasets and the role of the project as critique: students “really thought about it, how you negotiate with the machine and how much freedom you should have and how much agency you just have” [CMa].

5 Discussion

In this discussion, we address some potential developments of the exercises and reflect on our initial research questions. We discuss how the AI exercises address current methodological gaps and, more broadly, how this work contributes to a larger programme around design, HCI and AI, nurturing a distinctively designerly AI culture.

5.1 Effectiveness and future work

The exercises were seen as effective overall, although they could be further improved through use, observation and iteration. They produced thoughtful, socially engaged responses, but to a large extent remained far from the rapid and technically grounded results generated at the beginning of the course, when students were provided with tutorials focused on learning particular AI technologies. As an example, despite the deep technical grounding of Uncertain Interactions, most student responses did not get deep into the specifics of model output and how to make use of it. Future versions of the exercises could look to bridge this gap, as could their use in more technical contexts, where models were really being trained and deployed. There could also be support to help students decide which concerns to prioritise—for example, worries about people falling in love with their AI devices might not be the key problematic of the technology as created. While this prioritisation is arguably a part of general design practice, having concrete examples to contextualise the discoveries would be helpful. Practically, most students were relying on pre-built models and ‘Wizard of Oz’ setups (Browne 2019; Dahlbäck et al. 1993) that used human action to simulate complex behaviour. This limited the utility of data-driven exercises (e.g. Poor Datasets) and fed into a focus on the anthropomorphic possibilities of AI. This also led to less engagement with the possibilities of new forms of human–machine interaction than we might have hoped for.

5.1.1 Timing and situation

The time that the students had to execute the exercises was short, which may have limited the potential for deep reflection and thoughtful practice. While the students still had access to the methods afterwards, few groups chose to make use of them, so there is space to explore more prolonged engagement. The positioning in the course was somewhat contentious, with many students feeling the methods had been introduced too late (Sect. 4.1.2)—this is coupled to their assumption that the methods were there for concept development and ideation. However, the overall feeling from the coaches was that the timing was sensible: it provided a way to zoom out around existing concepts and add richness. This divergence of opinion is part and parcel of process-based education—there are often different views from within the process than from outside it. However, it does point to the need for a stronger sense of what one can expect from the methods, and an indication of when and how they could be productively deployed.

5.1.2 Choice and range of methods

This initial set of exercises was based on a particular set of theoretical ideas; it is clear that other theories and concepts could prompt additional methods, and other methods could be derived from the theories used. There is certainly no shortage of candidates: agential cuts (Shotter 2014) could provide techniques to divide up complex systems and consider multiple boundaries through more or less embodied encounters (Vagg 2022); ideas of cyborg intentionality (Verbeek 2008) could lead us to enact pairings with composite possibilities (Rapp 2021); and introspection provides a lens to think about relations between AI and lived experience (Brand et al. 2021). Methods with a clear technical genesis would offer immediate experiences that are deeply embedded in and shaped by the technology, for example deliberately misusing vision algorithms (van der Burg et al. 2022) or using computer vision as a site of enquiry (Malsattar et al. 2019). We see this as the start of a collection of ways to engage in this area, which will grow over time. Additional exercises might emphasise different parts of the design process and different modalities of experience, as well as introducing new theories or grappling with particular qualities of AI.

5.1.3 Applicability

In terms of subject matter, the exercises were applicable to a range of projects across autonomous cars, robots, the Internet of Things, hospitality and so on. They also helped with a range of issues, from shaping overall concepts to detailing important parts of the interaction. The application here was somewhat particular: the middle stages of an exploratory, creative prototyping brief. We would expect that the methods can be used in other processes and at different levels of technical fidelity. In fact, several of the methods, such as Roleplaying AI Networks and Thing Ethnography of AI Systems, are likely to give better results as the project is more developed and the context is stronger. Others, such as Poor Datasets, are likely to be more useful with a developed technical implementation, while Uncertain Interactions could help with ways to create interfaces around probabilistic models in deployment.

5.2 RQ1: Conceptualising and prototyping practices with AI

The exercises illustrated some of the issues that students have when carrying out prototyping and conceptualisation with AI: the need to deal with uncertainty, the possibility of more human-like interactions but less clearly defined capabilities, and the need to hold multiple levels together. This clearly asks a lot from designers, especially in this case, where many of them did not have strong electronics and coding skills before the course. The experiential (Hemment et al. 2022a) and enacted (Elsden et al. 2017) aspects of the workshops were helpful for navigating this terrain, as the subjects of discussion could be played out in the group, adding to the sense of tangibility and refining how interactions should unfold. The interactional focus of this work makes it distinct from ideation tools such as AIxDesign’s ideation cards (AIxDesign 2022), which focus on conceptual innovation, or work on developing user experiences (Subramonyam et al. 2021a, b), which makes the interface the primary subject of design. In line with open-ended, critical and speculative prototyping methods (Malsattar et al. 2019; van der Burg et al. 2022; Nicenboim et al. 2020), the exercises took the students into the relational and interactional possibilities of AI.

From the feedback, it was important to give students exercises that were concrete enough that they could follow the steps. Several of the exercises were close to relatively standard design practices—Uncertain Interactions drew on the creation of state diagrams as a design articulation tool, and the idea of acting out interactions as a form of prototyping is well established (Van Der Helm and Stappers 2020). However, they were adapted to bring AI qualities into the familiar interaction design practices, emphasising aspects like uncertainty, interface capabilities, distributed responsibilities and so on. It is clear that for some of the students, the simple forms of the exercises would have been enough: simply asking ‘what might go wrong’ and drawing a state machine to deal with it, rather than getting into the idea that machine learning systems produce probabilistic outputs, produced useful results. None of the students chose to work directly with datasets; this may be a feature of their projects, as there was not much training and learning happening, or simply a lack of attraction to the particular exercise.
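To illustrate the gap between the two framings (a hedged sketch with assumed states and thresholds, not student work): the familiar ‘what might go wrong’ state diagram can be extended to treat the model’s output as probabilistic, branching on confidence rather than on a certain input.

```python
# Sketch of an interaction state machine that branches on classifier
# confidence; states and thresholds are illustrative assumptions.
def next_state(state: str, label: str, confidence: float) -> str:
    if state == "LISTENING":
        if confidence >= 0.8:
            return "ACTING"      # confident enough: act on the intent
        if confidence >= 0.5:
            return "CLARIFYING"  # uncertain: ask the user to confirm
        return "LISTENING"       # too uncertain: stay quiet, keep sensing
    if state == "CLARIFYING":
        return "ACTING" if label == "yes" else "LISTENING"
    return state

print(next_state("LISTENING", "wave", 0.62))  # -> CLARIFYING
```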

There was a tendency for many of the groups to drift into anthropomorphisation, to imagine relations as overly human (Marenko and van Allen 2016), and to be diffuse about the capabilities of the technology. This relates to some of the particular AI characteristics that we will discuss in the next section; it is clear that prototyping will start to take different forms. The evolution of prompt engineering as a discipline (Liu and Chilton 2022) and the potential to generate working systems from prompts (e.g. Aptly (Aptly 2022)) indicate that new forms of prototyping are emerging. Here, the constraints are less well defined than working with code on an Arduino, but no less present: training a TeachableMachine (Carney et al. 2020) to detect a gesture has just as many concerns as using the electronic gesture sensor built into the Arduino BLE Sense the students were using, but the failure modes play out differently, and a different set of prototyping practices are brought to bear. The multiple viewpoints contained in the exercises here—people, things, datastreams, algorithms, networks—help to tease out the parts of interactions to prototype. Enacting these possibilities makes it easy to fall into broad, fuzzy, anthropomorphic thinking about what systems might do; the challenge for developing new forms of prototyping is to temper this with a grounding in the capacities of the systems being designed, and to engage with the human-like affordances of the technology without missing the new machinic possibilities. The exercises here helped students to clarify their concepts, move forward with their prototyping and develop ideas about the responsibilities of creating AI systems, while maintaining designerly concerns of materials, aesthetics, function, fit to context and engagements with multiple actors.

5.3 RQ2: Grasping interactional, relational and contextual qualities of AI

Our analysis of the student responses in relation to the current paradigm shifts in HCI shows that interactional, relational and contextual qualities of AI could be important elements of design and AI educational programmes. To unpack this, we look at our findings in relation to wider notions of agency, human–machine relations, and understandings of AI.

5.3.1 Agency

As noted above, AI is a tricky term, but a lot is contained in the ideas of agency which it can develop, in particular around ‘non-humanesque agencies’ (Hildebrandt 2020). Some of these agencies were clear prompts for our work: a failure to recognise certain kinds of people as being human (Buolamwini and Gebru 2018) shapes the inter-agencies between vision systems and people; the ability to make decisions rapidly and constantly gives a sense of autonomy, but one which differs both in character and meaningfulness from that of humans. Several of the exercises were aimed at interrogating these questions: ‘Be the AI’ prompted a reflection on exactly what the machines were doing, the interviewing and roleplaying exercises asked the participants to feel into what the agential possibilities were, and the more conceptual exercises questioned what agencies and responsibilities humans had around the systems. This helped participants to think about “what are the actual choices that we are gonna make or what part of the interaction are we gonna do ourselves and what part is the machine gonna do?” [p16]—a key part of getting past the myths about AI capabilities (Natale and Ballatore 2020).

Much of the thinking reported around issues of agency had to do with “how it’s gonna be alive for people” [p20]—the clear, animate, characterful side of agency. This likely has roots in some of the roleplaying methods. The shape of the collective roleplaying exercises was informed by an increased emphasis on co-performance: humans were brought in to act out what the smart technologies would do, and to notice possibilities for more shared agency, whether co-learning with the AI or finding ways for objects to speak for themselves.

5.3.2 Human–machine relations

Some of the exercises prompted students to position AI in relation to humans and non-humans. Thing Ethnography of AI systems instructed the students to map the ecosystem of the thing and its touchpoints, and to reflect on who and what interacts with the concept. In Conversations with AI, the students were asked to enact an AI agent. Metaphor Shifts asked students to design systems based on a particular metaphor and then compare it to others. These exercises highlighted the relations of humans and non-humans within AI systems (Coskun et al. 2022; Nicenboim et al. 2020). They did this by decentering the designer’s perspective, to consider more actors and interactions that go beyond one user and one device (Verbeek 2020). They also invited students to relate to AI not only as a tool, but as a social agent that shapes people’s lives. In the students’ prototypes, intelligence, as well as responsibility, was not seen as a property of machines alone, but as shared between humans and artificial partners. Similarly, uncertainty and unpredictability were ‘collaboratively curated’ to ‘imagine forms of digital interaction’ (Marenko and van Allen 2016).

The findings show that the exercises helped students to expand their concepts to account for the ecologies around them, especially when their projects were centred on particular embodiments—for example, extending from a plant pot to a community of plants, and looking at the plant–plant relations as well as the plant–human ones. The exercises also helped them acknowledge other people beyond the immediate users, thinking into how those people would relate to the system, and what responsibilities the user, system and designer have towards them. Furthermore, thinking of their concepts in relation to humans and non-humans created awareness among the students of the human labour that is implicated in sustaining AI systems (Sinders and Ahmad 2021). The kind of metaphorical social relationship the AI had with others ultimately influenced the designs: when the system was cast as a friend, it was seen, designed and conceptualised differently from when it was cast as a pet.

5.3.3 From explanations to understandings of AI

One of the current challenges in the design of AI systems is how to support people in understanding them, especially when they are used to make autonomous decisions or create knowledge. AI explainability is especially challenging when based on deep learning models, given that some of the paths that AI systems use to give recommendations are not interpretable (Ehsan and Riedl 2020), and the source of many generative outputs is complex (e.g. Kovaleva et al. 2019). While understanding ML in its technical sense is important, recent approaches in the explainability of AI have pointed at other ways of understanding which are not based on technical explanations and instead promote experimentation, the challenging of boundaries, or respect (Nicenboim et al. 2022; Hemment et al. 2022a; Seymour et al. 2022). The findings expand the agenda of AI explainability by illustrating and unpacking particular design engagements with AI that go beyond mastering ML technical capabilities. This points at the kinds of understanding that designers might need to gain of AI. What helped designers understand AI were exercises that prompted reflection into the affordances, relations and wider implications that those systems might have. Those engagements were not based on learning how to code ML models, but on experimenting with changing perspectives, provoking failures, enacting behaviours, and drawing schemas. These tactics could become part of a new agenda for supporting designers in understanding AI, especially one that is aligned with theoretical developments in HCI such as the posthuman turn (Lindgren and Holmström 2020), as well as practical developments in design, such as methods used in critical, speculative and adversarial design (DiSalvo 2015; Irani and Silberman 2014; Bozic Yams and Aranda Muñoz 2021).

5.4 RQ3: Critical design perspectives while engaging with AI as a socio-technical system

While the exercises helped the students to develop their projects (from ideation to conceptualisation and detailing), they especially illuminated and modulated changes in the students’ design processes in relation to the socio-technical aspects of AI systems (Crawford 2021). The exercises supported students in reflecting on the role of AI within their concepts, in being more specific about which aspects of AI are present, and in developing a critical design perspective on AI around values of responsibility and agency.

Designing with AI as a socio-technical system means acknowledging that it is not only a technical domain, but also entangled with social practices, institutions and infrastructures, politics and culture. AI “is both embodied and material, made from natural resources, fuel, human labour, infrastructures, logistics, histories, and classifications” (Crawford 2021). This is not an entirely new perspective—AI has long been considered a material practice (Agre 1997)—but there is a need to consider the interaction between humans and machines as part of broader societal contexts, and the broader discursive settings in which AI is socially constructed as a phenomenon with related hopes and fears (Lindgren and Holmström 2020).

From the findings, it seems the exercises provided a space for students to go beyond the immediate concerns of a rapid prototyping session and engage in reflective practices that position AI within its broader societal contexts. It is clear from many of the responses that the workshop carried out here provided a moment to reflect. Some of this can be ascribed simply to the fact of having an intervention at all—a space that prompted further thought. However, some of the exercises were more specific triggers for reflective engagement, with Meaningful Human Control and Resisting/Subverting AI asking students to explicitly consider critical perspectives. There was evidence of ‘reflection-in-action’ (Yanow and Tsoukas 2009): moments where, in the midst of carrying out experiential exercises around the prototypes, students ‘ma[de] previously implicit assumptions about the work explicit’ (Wegener et al. 2019). Where it had previously seemed that an insect swarm would give a warm sense of companionship, looking at the technological sense of surveillance and following revealed a darker possibility for the user; the idea that a supportive hug came from technical rather than human agency was found on reflection to be disturbing.

Some of the exercises prompted a sense of ‘zooming out’ (Nicolini 2009), to consider wider networks of things and people, and this zooming out was part of the students’ move towards more empathic design. This touched on the temporal (Pschetz and Bastian 2018) aspects of their designs, as they thought not just about interactional moments, but about the slower unfolding of relationships over time. Overall, there were many moves to develop a sense of criticality within their design: the AI-oriented methods created viewpoints for considering the role of technology and its possible overreaches from pragmatic and whimsical perspectives, in line with the ‘ongoing practical, critical, and generative acts of engagement’ (Suchman 2020) that build a responsibility for the things being designed.

5.5 Beyond education: nurturing designerly AI cultures

We started with the aim of helping students to grasp enough qualities of AI to adapt their processes and conceptualisations accordingly. The study highlighted that there are distinct ways to engage with AI that are appropriate to our setting, where the culture and practices of designers centre particular ways of working. In this section we discuss the aspects of our work that nurture this designerly AI culture.

The use of AI technologies varies by field. If we look at AI in terms of the enabling technology and the culture that surrounds it (Caramiaux and Alaoui 2022), some of the differences and parallels with the use of AI in design become clear. There are common moves to cast AI as a creative partner (Llano et al. 2022; McCormack et al. 2020) within music, as a solution for optimisation (Noor 2017) within engineering, as a formalisation and purification of human thought (Chiusi 2020; Singh et al. 2019) in decision-making organisations, and so on. Within design, the places that AI might sit are still being negotiated. Do we bring it into the process as a sparring partner for ideation (Simeone et al. 2022) or a source of creative inspiration (Yun et al. 2022)? Do we use it to re-understand the world through divergent practices (Malsattar et al. 2019)? Is it a new computational capacity for which we have to develop new UX practices (Subramonyam et al. 2021b)? Or a boundary object whose politics need critique (Crawford and Paglen 2019; Lyons 2020)? All of these are within the remit of design. What we are interested in emphasising here is the possibility of a designerly culture around the use of AI technologies, whether in processes, outcomes or critique. Just as a shift from explanation to shared understanding (Nicenboim et al. 2022) speaks to a relational, experiential mode of engagement, the exercises here create those experiences, and give ways to pick up, tangle and hold those relations. We suggest there are three key features of the methods that support this: experientiality, pragmatism and reflection.

The experiential nature of the methods appeared to be key in bringing in different perspectives on existing work, from noticing potential implications to uncovering new actors and interrogating positive ideas of agency. In this prototyping-oriented style of working, enacting and dramatising possibilities helped to grasp concepts. This was particularly relevant to working with AI systems, where the level of agency expected of the technology is high, so vitalising it makes intuitive sense.

Second, the pragmatic nature of the exercises, distilling complex ideas down to a set of steps to explore, supported critical discussion. Rather than starting from theory, students were able to develop grounded experiences and respond to them. This led to practices such as developing their own checklists for responsibility, as well as rethinking interactions based on new metaphors for the relations between technologies and humans.

Finally, the exercises all point to building the skills that a reflective designer in AI might need—“Perhaps the thing that they have in common is that they make you reconsider what your intention was and how that intention has manifested itself into the concept” [p16]. As such, they are distinct from technical support, even technical support tailored to creative practitioners (AIxDesignComm 2020), but look to build bridges from more than human thinking (Coskun et al. 2022; Coulton and Lindley 2019; Giaccardi and Redström 2020; Nicenboim et al. 2020) towards technical practice.

By providing this kind of multifaceted toolbox, we contribute to shaping the emerging AI-Design culture as something distinct from the technical, scientific, artistic and socio-legal cultures that are relatively well established. Further, we believe that this practice of grasping AI can be useful beyond the classroom, as a powerful and versatile support for design professionals to meaningfully engage with the development of intelligent systems.

6 Conclusions

There is a growing need for designers to engage with artificial intelligence and machine learning in their practice as these technologies become integrated into the functioning of the physical and digital systems that they design. A particular challenge here is how to carry out ideation and early-stage prototyping around AI/ML, when the exploratory nature of the work makes it impossible to invest much time in a detailed technical understanding of particular algorithms or systems. At the same time, the technical possibilities of emerging algorithms can exert an overly large pull on designs, artificially narrowing the solution space and drawing away from the needs and qualities of the interaction.

To develop the potential for designers to engage meaningfully in this space, working from an educational perspective, this paper introduced a series of ‘AI exercises’ informed by recent theoretical developments in third-wave HCI to help students grasp AI as a socio-technical system. We developed three levels of consideration for designing AI systems: interactional affordances, relational possibilities, and the wider social implications of AI systems; and provided methods for working at each level. Through qualitative analysis of these exercises with a group of students, we built up a picture of the impact the interventions had on their understanding of AI and their project development. Through the exercises, the students refined their designs and clarified their concepts, and were able to move forward with their prototyping with a greater sense of confidence in their designs and responsibility around the process. The experiential, pragmatic aspects of the exercises helped to make theoretical ideas concrete and generative of new possibilities, while keeping a sense of materiality and interaction with humans. The space for reflection provided by the exercises helped the students to develop a wider perspective on their work within the bounds of a rapid prototyping project.

The study findings highlight the ways in which experimental design exercises can support students in understanding AI, especially considering that such understanding needs to go beyond mastering ML technical qualities. The exercises here helped illuminate and modulate changes to the students’ design processes in relation to the interactional, relational and contextual qualities of AI, helping students develop a reflective and critical design perspective while responding to the key theoretical developments discussed in the AI community within HCI. Through the discussion, we raise questions of how a socio-technical view of AI, through ideas of agency and relationality, can support a designerly culture around the development of AI.