1 Introduction

The use of artificial intelligence (AI) to generate images received a burst of publicity with the 2021 release of OpenAI’s DALL-E system. Soon, other AI image generators—including Stable Diffusion, Midjourney, and Craiyon—joined it in a rapidly evolving technology landscape. These models, which combine deep learning neural networks and natural language model interfaces, use databases of images and text scraped from the internet to create new images (Kelly 2022). Since 2021, the use of AI image generators has increased rapidly: for example, Stable Diffusion had more than 10 million daily users by September 2022 (Wiggers 2022), and Midjourney had 13 million users by March 2023 (Stanley-Becker and Harwell 2023). As the popularity of such technologies has grown, their proponents have hailed them as making artistic creation cheaper, easier, and more democratic (Sung 2022).

Yet the expanding adoption of AI image generators has also provoked criticism, particularly from artists. When an AI-generated image received first prize in the Colorado State Fair’s 2022 digital arts competition, some observers accused the winning entry’s creator of cheating and argued that such images cannot reflect the human agency or creativity of real art (Roose 2022). In response to the rising use of AI-generated imagery by companies, members of the artistic community have also raised concerns that these technologies will take jobs away from human artists (Deck 2022; Roose 2022). Furthermore, critics have accused AI image generators of stealing human artists’ intellectual property and unique styles by scraping their artwork from the internet and using it in image-generating algorithms without obtaining consent from, crediting, or compensating the original artists (Lnu 2023; Roose 2022; Sung 2022). Building on these arguments, one group of artists has filed a lawsuit against AI image generator companies—as has Getty Images, a leading stock image provider (Vincent 2023).

The controversies surrounding AI image generators have featured prominently in news coverage. For example, major US media outlets such as the New York Times, the Washington Post, CNN, CBS News, and NPR all ran stories about the AI-generated entry that won the Colorado State Fair digital arts competition. Similarly, a range of news organizations covered subsequent developments such as the lawsuits against AI image generator companies and the backlash against an AI tool that generated images in the style of renowned (and recently deceased) Korean manga artist Kim Jung Gi. Debates about these technologies have also played out on social media, with proponents and critics presenting their arguments to audiences on Twitter (recently rebranded as X), Facebook, Reddit, TikTok, and other platforms.

Given that public opinion can influence the adoption, use, and regulation of emerging technologies such as AI (Cave et al. 2019; Zhang and Dafoe 2019), researchers have recently begun to examine public responses to AI image generators. Prior to the explosion of broad popular interest in these technologies, one experimental study found that varying the valence (positive or negative) and agency language (agent or tool) in information about AI image generators influenced the extent to which participants anthropomorphized such generators as well as the degree of credit for AI art they attributed to different actors (Epstein et al. 2020). Studies have also explored how participants evaluate AI art and AI image generators after viewing the former or interacting with the latter (Gu and Li 2022; Hong and Curran 2019; Lima et al. 2021; Mikalonytė and Kneer 2022). Looking at US public opinion more broadly, a national survey conducted by the Pew Research Center in December 2022 found that 20% of respondents saw “using artificial intelligence (AI) to produce visual images from keywords” as a “major advance for the visual arts,” and another 26% saw it as a “minor advance” (Funk et al. 2023). Yet little research to date has examined how exposure to the growing media debate surrounding AI image generators is linked to public attitudes about such technologies.

To shed new light on this topic, the present study examines how patterns in media use—including following technology news and watching science fiction—and exposure to specific social media messages predict opinions about AI image generators. In doing so, the study builds on framing theory (Entman 1993; Scheufele 1999) as applied to communication about science and technology issues (Gamson and Modigliani 1989; Nisbet 2009). In particular, the study’s hypotheses and research questions extend recent theorizing about how media use (Brewer et al. 2022) and exposure to media frames (Bingaman et al. 2021) are related to attitudes about AI. Drawing on data from a 2022 survey of the US public that included an experimental manipulation of exposure to different tweets about the topic, the analyses show that technology news use and science fiction viewing predicted support for AI art but also predicted negative perceptions of AI image generators as stealing jobs and art styles from human artists. The experimental results, in turn, indicate that exposure to tweets framing AI image generators in different ways can shape opinions about these technologies.

2 Framing AI image generators

A frame, as defined by Gamson and Modigliani (1987), is a “storyline” for what an issue “is about” (p. 143). Frames are constructed from metaphors, catchphrases, visual icons, and other contextual cues that can help audience members interpret topics such as emerging technologies (Gamson and Modigliani 1989). Framing, in turn, revolves around the presentation of issues in public discourse and the influence of this presentation on how audience members make sense of the issues in question (Entman 1993). Framing processes take place on four levels: in the minds of elites and communication professionals, including journalists and social media influencers; in communicative texts, such as news stories and social media posts; in the minds of individual audience members; and in the broader culture (Entman 1993). Frames in communication highlight a given interpretation of a topic and deemphasize others through selection of what information to include and exclude, while frames in mind are cognitive schemata that organize information about a topic by giving particular ideas and associations greater salience than others (Bauer and Bogner 2020; Nisbet 2009; Scheufele 1999). By shaping frames in audience members’ minds, exposure to frames in communication can influence opinions about the topic at hand, thereby yielding framing effects (Nelson et al. 1997; Scheufele 1999).

Research on framing of science and technology topics has identified a set of frames that commonly appear in media messages and resonate with broader cultural values (Gamson and Modigliani 1989; Nisbet 2009). One such frame is the social progress frame, which emphasizes how new scientific and technological advancements will benefit humanity and enhance quality of life. A more pessimistic frame is the Pandora’s box frame—also labeled the runaway science or Frankenstein’s monster frame—which casts scientific and technological developments as unleashing negative consequences on society. Studies across a range of issues, from nuclear power to biotechnology to nanotechnology, demonstrate that exposure to such frames can shape audience members’ attitudes and beliefs (Cobb 2005; Druckman and Bolsen 2011; Gamson 1992).

Building on these findings, the present study investigates how use of specific media genres and exposure to specific frames are linked to opinions about AI image generators. It begins by looking at two genres that may play key roles in framing AI: news coverage and science fiction.

3 News use and attitudes toward AI image generators

Driven by a frame-building process that often prioritizes values such as novelty and drama (Scheufele 1999), news media outlets use framing to present engaging stories about complex topics that may be unfamiliar to their audiences—including science and technology topics (Gamson and Modigliani 1987, 1989). Though news framing varies across such topics, it often draws on common frames such as the social progress frame and the Pandora’s box frame (Nisbet 2009). For example, news stories about technologies such as geoengineering (Corner and Pidgeon 2015), biotechnology (Nisbet and Lewenstein 2002; Priest and Ten Eyck 2003), and nanotechnology (Priest 2005) have highlighted both potential benefits and potential risks of new advancements. Depending on the overall balance of framing in coverage, patterns of news use can predict audience responses to these sorts of emerging technologies (Besley and Shanahan 2005; Brossard and Shanahan 2003; Liu and Priest 2009; Nisbet and Goidel 2007).

Of particular relevance to the context at hand, recent studies have shown that news coverage of AI in general has included both social progress frames emphasizing the technology’s potential to improve lives and Pandora’s box frames emphasizing the problems it may unleash (Chuan et al. 2019; Fast and Horvitz 2017; Obozintsev 2018). At the same time, this research indicates that news framing of AI tends to be more optimistic than pessimistic. Social progress framing has highlighted potential benefits for jobs and quality of life, whereas Pandora’s box framing has highlighted potential job losses and invasions of privacy as well as existential threats to humanity (Brewer et al. 2022; Chuan et al. 2019). In keeping with these patterns of framing in coverage, as well as broader findings that news use can predict audience members’ frames in mind (Scheufele 1999), one recent study demonstrated that among the US public, technology news use was linked to invoking both social progress frames and Pandora’s box frames for AI (Brewer et al. 2022). Consistent with the overall prevalence of social progress framing over Pandora’s box framing in coverage, this study also found that following technology news fostered support for developing AI.

The same frames that appear within news coverage of AI in general can be found within news coverage of AI image generators in particular. On one side of the debate, advocates have framed these technologies in terms of social progress by presenting them as new tools for creating “real art.” For example, an article in Wired magazine described how there is “a real beauty to their creativity, and we stare much in the way we might appreciate a great art show at a museum” (Kelly 2022). Similarly, an AP story described an AI-based art installation at the Museum of Modern Art in New York to illustrate how “there are many people who embrace the new AI tools and the creativity they unleash” (O’Brien and Lajka 2023). On the other side of the issue, critics of AI image generators have used Pandora’s box framing to cast these technologies as “art thieves” that threaten artists’ livelihoods and steal unique art styles. As a case in point, one Washington Post story quoted an artist saying, “Nobody understands that a program taking everyone’s art and then generating concept art is already affecting our jobs” (Hunter 2022). Likewise, an NBC News story quoted an artist who accused AI image generators of “forgery” and “art theft” (Sung 2022), and a New York Times story suggested that “significant advances in generative artificial intelligence mean robots are coming for artists” (Hill 2023).

Given previous findings that members of the public who followed technology news were particularly likely to invoke social progress frames for AI and express support for it while also being particularly likely to invoke Pandora’s box frames for AI (Brewer et al. 2022), the present study tested the following hypothesis:

H1

Technology news use will predict (a) support for AI art but also (b) negative beliefs about AI image generators taking artists’ jobs and stealing their art styles.

4 Science fiction viewing and attitudes toward AI image generators

Science fiction provides another potential source of audience frames for new technologies (Delgado et al. 2012). A long line of research suggests that portrayals in entertainment media can cultivate attitudes about science (Dudo et al. 2011; Gerbner 1987; Nisbet et al. 2002), and more recent work on genre-specific media effects (Lee and Niederdeppe 2011) suggests that depictions in science fiction films and television programs can shape attitudes toward a range of emerging technologies (Besley and Shanahan 2005; Brewer and Ley 2021; Nisbet and Goidel 2007). Such effects of science fiction could stem in part from stylistic choices creators make to increase the likelihood that audience members will accept what they are watching as plausible and credible (Barnett et al. 2006): portrayals that are perceptually realistic in terms of visuals and dialogue may produce a naturalizing effect that helps to shape audience beliefs (Kirby 2003). Furthermore, science fiction narratives may influence audience members by inducing a sense of psychological transportation or immersion (Green and Brock 2000; Nader et al. 2022).

From the introduction of HAL 9000 in the 1968 film 2001: A Space Odyssey onward, science fiction movies and television programs have offered depictions of both threatening and helpful artificial intelligences (Nader et al. 2022; Obozintsev 2018; Perkowitz 2007). Prominent examples of Pandora’s box portrayals include Skynet from the Terminator franchise, the Machines from the Matrix franchise, and Ultron from the Marvel Cinematic Universe franchise, whereas examples of AIs as instruments of social progress include Data from the Star Trek franchise, the Machine from the television series Person of Interest, and Jarvis from the Marvel Cinematic Universe franchise. Evidence indicates that such portrayals can influence audience members’ attitudes about AI (Nader et al. 2022), but recent findings also suggest that the links between science fiction viewing and opinions about the topic are complex in ways that reflect the genre’s ambivalent depictions (Brewer et al. 2022).

At the time of writing, Hollywood has yet to offer notably prominent portrayals of AI image generators. However, science fiction’s broader depictions of threatening and helpful AIs could serve as bases for forming positive opinions about these technologies, negative perceptions of them, or both. With this in mind, the present study asked the following research question:

RQ1

How will science fiction viewing predict (a) support for AI art and (b) negative beliefs about AI image generators taking artists’ jobs and stealing their art styles?

5 Exposure to specific messages and attitudes toward AI image generators

Looking beyond broad patterns in media use, exposure to specific frames in media messages can also shape responses to new technologies (Cobb 2005; Druckman and Bolsen 2011). In the context at hand, one recent experimental study showed that participants exposed to social progress framing of AI reported greater support for the technology than did those exposed to Pandora’s box framing (Bingaman et al. 2021). Building on this research, the present study hypothesized that seeing AI image generators framed as instruments of progress in art—or, alternatively, as threats to artists—will influence opinions about the topic. On one hand, exposure to framing of such technologies as advances in “real art” should lead to more favorable views:

H2

Compared to people exposed to no frame, those exposed to framing of AI image generators as tools for creating real art will report (a) more support for AI art and (b) less negative beliefs about AI image generators taking artists’ jobs and stealing their art styles.

On the other hand, exposure to framing of AI image generators in terms of artists’ worries or anger at having their work copied should produce the opposite effect:

H3

Compared to people exposed to no frame, those exposed to framing of AI image generators in terms of artists’ concerns or outrage will report (a) less support for AI art and (b) more negative beliefs about AI image generators taking artists’ jobs and stealing their art styles.

Building on findings that media messages often include competing frames (Nelkin and Marden 2004; Wise and Brewer 2010) and that exposure to two-sided framing can neutralize framing effects (Chong and Druckman 2007), the present study also considered how exposure to competing frames within the same message may shape audience members’ opinions about AI image generators:

RQ2

How will exposure to two-sided framing of AI image generators affect (a) support for AI art and (b) negative beliefs about AI image generators taking artists’ jobs and stealing their art styles?

Given that much of the debate between proponents and critics of AI image generators has taken place on social media platforms such as Twitter (Metz 2022; Vallance 2022; Vincent 2022) and that frames embedded within tweets can shape attitudes about science and technology topics (Steede et al. 2020; Vaala et al. 2022), including AI (Vorobeva et al. 2023), the study’s design focused on testing how exposure to tweets may influence opinions about the issue at hand.

6 Methods

The study analyzed original data from a national online survey designed by the authors and conducted from December 8 to December 18, 2022. The sample (N = 1,035) was selected from Qualtrics panels based on US population quotas for gender, age, race, education, income, and region. The survey design, including the experimental manipulation embedded within it, was approved by the Institutional Review Board of the authors’ university.

6.1 Media use measures

All respondents were asked how closely they followed “news about technology” on a four-category scale (0 = not at all; 3 = very; M = 1.70; SD = 0.85). In addition, all respondents were asked how often they watched “science fiction shows” on a four-category scale (0 = less than a few times a month; 3 = nearly every day; M = 1.08; SD = 1.00).

6.2 Experimental treatments

To test the effects of different frames for AI image generators, respondents were randomly assigned to a control group that received no tweet (n = 211) or one of four treatment groups that viewed a screen capture of a tweet about the topic. In selecting stimuli for this between-subjects experimental design, the study emphasized external validity by incorporating tweets from the real-life debate about AI image generators.

Two of these tweets were selected from the top results generated by Incognito mode Google searches in November 2022 for “AI art” and “Twitter” or “AI image” and “Twitter.” The first treatment group (n = 207) received a September 22, 2022 tweet from Playground AI founder Suhail Doshi (@Suhail) that included two AI-generated images along with text framing AI-generated images as real art: “There’s no doubt in my mind that making AI art is real art. I spent about 1.5 hours tweaking things to produce these visuals. The hair, perfect red lipstick color, focus, eyes, wrinkles, theme, reflections, clothes. It was a joy to achieve a result I couldn’t have previously.” The second treatment group (n = 207) received an August 13, 2022 tweet from artist R. J. Palmer (@arvalis) that framed the topic in terms of artists’ concerns. Palmer tweeted four AI-generated images and wrote, “A new AI image generator appears to be capable of making art that looks 100% human made. As an artist I am extremely concerned.”

In the other two treatment conditions, respondents received a tweet from a prominent news organization. The third treatment group (n = 198) received an October 21, 2022 tweet from CNN that included an AI-generated image along with a headline framing the topic in terms of artists’ outrage: “These artists found out their work was used to train AI. Now they’re furious.” Meanwhile, respondents in the fourth treatment condition (n = 203) received a September 5, 2022 tweet from the New York Times presenting a two-sided debate about AI image generators. This tweet included an image of the AI-generated piece that won a digital art prize in the 2022 Colorado State Fair along with a headline that highlighted competing interpretations of the controversy about the entry: “An artwork made with an artificial intelligence program won a prize at the Colorado State Fair’s art competition—and set off fierce backlash from artists who accused its creator of, essentially, cheating. ‘I won, and I didn’t break any rules,’ he said.”

In the analyses, assignment to experimental conditions was captured by a series of indicator variables for whether respondents received each treatment (0 = no; 1 = yes). The control condition was treated as the baseline for comparison, with supplementary analyses testing for differences across treatment conditions.
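As a minimal sketch of how such indicator coding works (the condition labels and variable names here are hypothetical illustrations, not the study’s actual data file):

```python
import pandas as pd

# Hypothetical condition labels standing in for the study's five groups.
df = pd.DataFrame({"condition": ["control", "real_art", "concerns",
                                 "outrage", "two_sided"]})

# One 0/1 indicator per treatment condition; the control group receives
# no indicator of its own and thus serves as the omitted baseline.
for treatment in ["real_art", "concerns", "outrage", "two_sided"]:
    df["t_" + treatment] = (df["condition"] == treatment).astype(int)

print(df)
```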

6.3 Measures of attitudes toward AI image generators

After receiving the tweet for their condition (or no tweet, for the control condition), respondents were told, “As you may know, artificial intelligence image generators such as DALL-E and Craiyon use algorithms and data sets of existing images from the internet to create new digital images.” They were then asked how much they supported or opposed “the development of artificial intelligence art generators” on a five-category scale (strongly oppose = 0; strongly support = 4). They were also asked how much they agreed or disagreed on five-category scales (strongly disagree = 0; strongly agree = 4) that “images generated by artificial intelligence can be real art,” “images generated by artificial intelligence should be allowed to compete against human art,” “artificial intelligence image generators will steal the art styles of human artists,” and “artificial intelligence image generators will take jobs from human artists.”

A plurality of respondents (36%) supported the development of AI image generators, whereas 20% opposed it (M = 2.20; SD = 1.05; see Table 1). A plurality also agreed that AI-generated images can be real art, with 45% agreeing versus 24% disagreeing (M = 2.25; SD = 1.15). At the same time, half of the respondents (50%) disagreed that AI-generated images should be allowed to compete against human art, whereas only 23% agreed (M = 1.55; SD = 1.25). Furthermore, around half of all respondents agreed that AI image generators will steal the art styles of human artists (47%, versus 21% disagreeing; M = 2.37; SD = 1.14) and that AI image generators will take jobs from human artists (51%, versus 21% disagreeing; M = 2.42; SD = 1.16).

Table 1 Measures of attitudes toward artificial intelligence image generators

For the analyses, responses to the first, second, and third items were averaged to create an index of support for AI art (α = 0.77; M = 2.20; SD = 1.05). Responses to the fourth and fifth items were averaged to create an index of negative beliefs about AI image generators (r = 0.60; M = 2.40; SD = 1.03). The two indices were not significantly correlated with one another (r = − 0.05).
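To illustrate how such indices are typically constructed (a sketch using simulated 0–4 responses and hypothetical item names, not the study’s data):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / total_var)

# Simulated 0-4 responses; column names are hypothetical stand-ins for
# the three support items and two negative-belief items described above.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 5, size=(1035, 5)),
                  columns=["support_dev", "real_art", "allow_compete",
                           "steal_styles", "take_jobs"])

support_items = df[["support_dev", "real_art", "allow_compete"]]
print(cronbach_alpha(support_items))                   # paper reports 0.77
print(df["steal_styles"].corr(df["take_jobs"]))        # paper reports 0.60

df["support_index"] = support_items.mean(axis=1)
df["negative_index"] = df[["steal_styles", "take_jobs"]].mean(axis=1)
print(df["support_index"].corr(df["negative_index"]))  # paper reports -0.05
```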

6.4 Control variables

Following previous research on predictors of broader attitudes toward AI (Araujo et al. 2020; Brewer et al. 2022; Nader et al. 2022; Selwyn and Gallo Cordoba 2022; Yigitcanlar et al. 2023), the analyses included controls for a number of background variables. Overall television viewing was measured by an item asking respondents, “On the average day, how much time do you spend watching television shows and movies (including viewing on a computer or mobile device)?” Response options ranged from none (coded as 0) to four hours or more (coded as 4; M = 2.77; SD = 1.19). Political ideology was captured through a standard seven-category measure (0 = very liberal; 6 = very conservative; M = 3.34; SD = 1.63), and religiosity was captured through a standard four-category measure for the importance of religion to the respondent’s life (0 = not at all important; 3 = very important; M = 1.90; SD = 1.07). Demographic controls included gender (male = 46%; female = 54%); age (M = 45.89; SD = 17.34); self-identification as Black (13%), Hispanic (18%), and Asian (6%); education (on a six-point scale; M = 2.43; SD = 1.50); and income (on a 12-point scale; M = 5.60; SD = 3.56).

7 Results

The analyses used ordinary least squares (OLS) regression to test the study’s hypotheses and address its research questions. The models for support for AI art and negative beliefs about AI image generators included the media use variables, the indicator variables for the experimental conditions, and the control variables (see Table 2).

Table 2 Predictors of support for AI art and negative beliefs about AI image generators
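As a minimal sketch of this specification (in Python with statsmodels; all variable names are assumptions mirroring the measures described above, and the data are simulated placeholders rather than the survey file):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data under hypothetical variable names.
rng = np.random.default_rng(1)
cols = ["support_index", "tech_news", "scifi_viewing",
        "t_real_art", "t_concerns", "t_outrage", "t_two_sided",
        "tv_hours", "ideology", "religiosity", "female", "age",
        "black", "hispanic", "asian", "education", "income"]
df = pd.DataFrame(rng.random((1035, len(cols))), columns=cols)

# OLS with the four treatment indicators entered alongside media use and
# controls; the omitted control condition serves as the baseline. Omitting
# t_real_art and adding a control-group indicator instead reproduces the
# supplementary models that treat the real art tweet as the baseline.
model = smf.ols(
    "support_index ~ tech_news + scifi_viewing"
    " + t_real_art + t_concerns + t_outrage + t_two_sided"
    " + tv_hours + ideology + religiosity + female + age"
    " + black + hispanic + asian + education + income",
    data=df,
).fit()
print(model.summary())
```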

Each of the key media use variables was associated with positive attitudes about AI-generated art and negative beliefs about AI image generators. Consistent with H1a and H1b, respectively, following technology news predicted greater support for AI art (b = 0.23; p ≤ 0.01) while also predicting perceptions that AI image generators will take jobs from human artists and steal their art styles (b = 0.14; p ≤ 0.01). Compared to respondents who did not follow technology news at all, those who followed such news very closely scored around two-thirds of a point higher on the support for AI art index (0.69) and around four-tenths of a point higher on the index for negative beliefs about AI image generators (0.42).
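These predicted differences are simply each coefficient multiplied by the span of the predictor (the 0–3 news attention scale ranges over 3 points); the same arithmetic underlies the science fiction figures reported next:

```latex
\hat{y}_{\text{very closely}} - \hat{y}_{\text{not at all}}
  = b \times (x_{\max} - x_{\min})
  = 0.23 \times 3 = 0.69
\quad \text{and} \quad
  0.14 \times 3 = 0.42
```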

In response to RQ1a and RQ1b, science fiction viewing predicted greater support for AI art (b = 0.13; p ≤ 0.01) while also predicting negative beliefs about AI image generators (b = 0.07; p ≤ 0.05). Relative to respondents who seldom or never watched science fiction, those who watched it frequently scored around four-tenths of a point higher on support for AI art (0.39) and around a fifth of a point higher on negative beliefs about AI image generators (0.21).

Turning to the experimental results, the respondents who received the real art tweet were around a quarter of a point more supportive of AI art than control respondents (b = 0.23; p ≤ 0.01)—a result consistent with H2a. However, negative beliefs about AI image generators did not differ significantly across these two conditions; thus, the results did not support H2b.

As anticipated by H3a, respondents who received the artists’ concerns tweet were around a quarter of a point less supportive of AI art than control respondents (b = − 0.28; p ≤ 0.01). Providing weaker support for H3a, respondents who received the artists’ outrage tweet were marginally less supportive of AI art (b = − 0.16; p = 0.07) than control respondents. Meanwhile, the results yielded only weak and partial support for H3b. Respondents in the artists’ concerns condition were marginally more likely than control respondents to hold negative beliefs about AI image generators (b = 0.18; p = 0.08), whereas those in the artists’ outrage condition did not differ from control respondents on this variable.

In response to RQ2a, respondents who received the two-sided debate tweet did not significantly differ from control respondents in their support for AI art. Nor did exposure to the two-sided debate tweet significantly influence negative beliefs about AI image generators, relative to the control condition (RQ2b).

Supplementary regression models that treated the real art tweet condition as the baseline revealed additional differences across conditions. Specifically, respondents who received this tweet were more supportive of AI art than those who received the artists’ concerns tweet (b = − 0.51; p ≤ 0.01), the artists’ outrage tweet (b = − 0.39; p ≤ 0.01), or the two-sided debate tweet (b = − 0.31; p ≤ 0.01). Furthermore, respondents who received the artists’ concerns tweet were more likely than those who received the real art tweet to hold negative beliefs about AI image generators (b = 0.32; p ≤ 0.01).

In light of findings that prior message exposure (Nelson et al. 1997) and political beliefs (Haider-Markel and Joslyn 2001) can condition framing effects, additional analyses tested whether technology news use, science fiction viewing, or political ideology moderated the effects of the framing treatments on attitudes about AI image generators. No significant interactions emerged for technology news use or ideology; however, the negative effect of the artists’ concerns tweet on support for AI art was significantly stronger among those who watched more science fiction (for the interaction term, b = − 0.26; p ≤ 0.01).
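A sketch of one such moderation test, reusing the simulated data frame and statsmodels import from the model sketch above (variable names remain hypothetical): the artists’ concerns indicator is interacted with science fiction viewing, and the coefficient on the product term captures the moderation (reported above as b = − 0.26).

```python
# The * operator expands to both main effects plus their product term;
# the coefficient on t_concerns:scifi_viewing tests whether the framing
# effect on support for AI art varies with science fiction viewing.
moderation = smf.ols(
    "support_index ~ t_concerns * scifi_viewing"
    " + t_real_art + t_outrage + t_two_sided + tech_news",
    data=df,
).fit()
print(moderation.params["t_concerns:scifi_viewing"])
```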

Of the control variables, three significantly predicted support for AI art: overall television viewing (b = − 0.05; p ≤ 0.05) and age (b = − 0.008; p ≤ 0.01) were negatively related to such support, whereas religiosity (b = 0.06; p ≤ 0.05) was positively related to this dependent variable. None of the control variables significantly predicted negative beliefs about AI image generators.

Given previous evidence that media use, including news consumption (Scheufele et al. 2002) and television viewing (Shrum 1999), can predict not only attitude valence but also attitude strength, another set of supplementary analyses tested whether the independent variables in the model predicted strength of attitudes about AI art and strength of beliefs about AI image generators. Measures of attitude strength were created by folding responses to the index items and then averaging these folded scores (see Scheufele et al. 2002). The analyses showed that both technology news use and science fiction viewing predicted stronger attitudes about AI art and stronger beliefs about AI image generators (p ≤ 0.01 for each). Thus, use of media featuring AI-related content appeared to be associated with greater crystallization of and confidence in attitudes about AI art, consistent with research in other domains. In addition, greater education was associated with stronger attitudes in each case (p ≤ 0.05 for both), exposure to the artists’ concerns tweet predicted stronger attitudes about AI art (p ≤ 0.01), and conservative ideology predicted stronger beliefs about AI image generators (p ≤ 0.05).
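Folding is simple arithmetic: each response is recoded as its distance from the scale midpoint, so that strong agreement and strong disagreement both register as strong attitudes. A minimal sketch, assuming 0–4 items with a midpoint of 2 and hypothetical item names:

```python
import pandas as pd

# Hypothetical 0-4 responses for two index items.
items = pd.DataFrame({"real_art": [0, 2, 4],
                      "allow_compete": [1, 3, 2]})

# Fold around the midpoint (2): responses of 0 and 4 both become 2
# (strongest), while the neutral response 2 becomes 0.
folded = (items - 2).abs()

# Attitude strength = mean of the folded item scores per respondent.
strength = folded.mean(axis=1)
print(strength.tolist())  # [1.5, 0.5, 1.0]
```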

8 Conclusion

This study sought to advance our understanding of how patterns in media use and exposure to specific media messages predict opinions about AI image generators. Taken together, its findings extend framing-based accounts of public attitudes about AI. In terms of media habits, the results show that technology news use was linked to such attitudes in multiple ways. On one hand, following technology news predicted support for AI art; on the other hand, it also predicted negative beliefs about AI image generators taking artists’ jobs and stealing their art styles. This set of results may reflect the capacity of news framing to shape audience responses to emerging technologies (Besley and Shanahan 2005; Brossard and Shanahan 2003; Liu and Priest 2009; Nisbet and Goidel 2007), combined with the mixed framing of AI within news coverage. News about AI in general features both social progress frames and Pandora’s box frames (Chuan et al. 2019; Fast and Horvitz 2017; Obozintsev 2018), and consumption of such news is associated with holding each frame in mind (Brewer et al. 2022). The present study’s results suggest that the same dynamic could play out in the specific context of AI image generators.

A parallel set of findings emerged for science fiction viewing, which predicted both support for AI art and negative beliefs about AI image generators. This pattern may reflect science fiction’s ambivalent framing of AI along with the genre’s power to shape attitudes about emerging technologies (Besley and Shanahan 2005; Brewer and Ley 2021; Nisbet and Goidel 2007) through perceptually realistic portrayals (Kirby 2003) and immersive narratives (Green and Brock 2000). Over the past few decades, popular films and television programs have featured examples of helpful artificial intelligences and threatening ones (Nader et al. 2022; Obozintsev 2018; Perkowitz 2007). Such portrayals, in turn, may provide audience members with frameworks for evaluating not only AI in general (Brewer et al. 2022; Nader et al. 2022) but also specific applications of it, including AI image generators.

The experimental results, in turn, indicate that seeing frames in social media messages can shape opinions about AI image generators. Framing AI art as real art increased support for it relative to no framing, whereas framing AI image generators in terms of artists’ concerns had the opposite effect. Meanwhile, two-sided framing of AI image generators produced no effects relative to the control condition, consistent with previous findings that competing frames can “cancel out” one another (Chong and Druckman 2007). Framing did not yield discernible effects in every case, particularly when it came to shaping negative beliefs about AI image generators relative to the control condition; however, comparing across framing conditions revealed additional differences between respondents who received the real art frame and those who received the other frames. All told, the experimental findings reinforce and extend previous research showing that exposure to specific frames can influence attitudes about emerging technologies (Cobb 2005; Druckman and Bolsen 2011), including AI (Bingaman et al. 2021).

In drawing conclusions from the study’s results, it is important to consider the limitations of its methods. To begin with, the findings for following technology news and science fiction viewing are based on correlational analyses of broad self-reports. Thus, caution is warranted in drawing conclusions about causal relationships between each form of media use and opinions about AI image generators. With this in mind, future research could collect richer measures, including consumption of specific news sources and types of science fiction content, along with other forms of media use such as social media. Moreover, future studies could draw on experimental and longitudinal approaches to conduct more direct tests of how news use and science fiction viewing shape attitudes about the topic.

The findings from the experimental manipulation of exposure to tweets provide stronger evidence of message effects. Yet these results, too, should be interpreted with several caveats. In terms of internal validity, the use of real tweets makes it impossible to isolate which specific message features—text, images, source, or some combination thereof—influenced respondents’ opinions. The use of these tweets does enhance the external validity of the study’s findings, but the experiment incorporated only a small set of Twitter messages about AI image generators. Thus, future research could test the impact of messages with other frames and messages on other platforms.

Such research could also test the extent to which framing effects on attitudes about AI art persist over time and with additional exposures to frames. Previous studies have yielded mixed evidence on the durability of framing effects produced by single exposures to frames (Druckman and Nelson 2003; Lecheler and de Vreese 2011). Thus, the sorts of effects observed in the present study may be ephemeral. Repeated exposures to the same frame over time could reinforce such effects; however, subsequent exposure to competing frames could neutralize them (Chong and Druckman 2007). In addition, the durability of framing effects may depend on the order in which audience members receive competing frames (Matthes and Schemer 2012).

Finally, the present study captured only two types of responses: support for AI art and negative beliefs about AI image generators taking jobs from artists and stealing their art styles. Future research could examine whether patterns in media use and exposure to specific media messages also predict other responses, including anthropomorphization of AI image generators, evaluations of AI artworks, and attributions of responsibility for AI art (see, e.g., Epstein et al. 2020; Funk et al. 2023; Hong and Curran 2019; Lima et al. 2021; Mikalonytė and Kneer 2022).

Considered within the bounds of these limitations, the present study’s results contribute to an emerging literature on public responses to AI image generators. Specifically, the findings highlight how patterns of media use and exposure to media messages can predict such responses, as well as how framing theory may help account for these relationships. The results presented here suggest that both news use and science fiction viewing act as double-edged swords when it comes to attitudes about AI image generators: each form of media use is tied to support but also to negative perceptions. Extending this logic, future shifts in the nature of news coverage or entertainment media portrayals toward more social progress framing or more Pandora’s box framing could alter these links in ways that bolster or erode public acceptance. Similarly, the experimental findings suggest that changes in the balance of messages about AI image generators could sway the public toward greater support or opposition. As such, the findings provide starting points for understanding how media use and media messages may help shape the trajectory of public opinion about AI image generators—and, ultimately, their adoption, use, and regulation.