1 Introduction

Information Communication and Media (ICM) is a rapidly growing field that integrates society, culture, and technology (Fuchs 2009). Amid the hype surrounding artificial intelligence (Newlands 2021), professionals in ICM fields such as telecommunications, journalism, entertainment, marketing, social media and information technology now face the question of whether they can trust the technologies they help to create. Although human–robot communication has been a critical topic in academic research over the last decade (Feher and Katona 2021), research on the challenges and future visions of ICM systems is sporadic or under-represented in academic publications and strategic AI documents. This is why it is particularly relevant to investigate the visions of ICM professionals.

The key research question is how ICM fields are changing due to AI technology and what the key consequences are expected to be. Our study design was deliberately open-ended to obtain a wide bandwidth of answers to questions about the possible futures of AI. Sub-questions addressed the advantages, benefits, disadvantages and uncertainties of AI, and probed for reflections on all the changes this may bring. We also wanted to understand the transformation of the ICM process and related changes in human–social values vis-à-vis the emerging AI environment (Feher 2020).

This research is especially timely if we consider the rapid spread of AI-driven ICM phenomena such as conversational media, deepfakes, recognition systems, AI-driven audiovisual media, bot journalism, generative music, social media, recommendation systems or synthetic media (Hight 2022; Trattner et al. 2022; Hartmann and Giles 2020). Text and audiovisual content produced by AI can be easily accessible and persuasive, even if it may be biased, offensive or misleading (Illia et al. 2023; Jackson and Latham 2022), implying trust issues as well as related concerns over authorship and verifiability. The outputs of current AI services are between 50 and 70% accurate, and they also produce false or fabricated content, so-called “hallucinations” (Lin et al. 2021). Thus, authenticity, deception and trust have become fundamental issues (Glikson and Asscher 2023; Hancock et al. 2020), also in policy research (Pierson et al. 2023).

As AI-generated ICM and generative AI increasingly reach a broad audience globally (Pavlik 2023; Kemp 2022) and influence decision-making, investment and policy development, understanding the visions and expectations of key professionals involved in this shift is crucial. If we can assume that experts trust the technology enough to invest their resources into the future of AI, even if they express serious concerns (Feher and Veres 2022), they can be seen as “technological trust mediators” (Bodo 2021), prominently involved in the socio-cultural construction of AI (Feher and Katona 2021). Thus, the study assumes that ICM experts are ambassadors of AI technology in terms of awareness of social and ethical issues, shaping the field for years to come.

According to the analysis of the survey, the participants’ visions can be mapped as a Glasses Model of AI Trust, representing their way of balancing hopeful beliefs, growing concerns, and overall uncertainties. This model visualizes how key values related to the potential of personalized and unbiased ICM operate in relation to human values, observable limits of AI, and structural reliability issues. The dynamics of believed and uncertain effects as represented in our model suggest that the future vision as expressed by our study participants is optimistic, where problems tend to be interpreted as opportunities and even as advantages. In this way, the Glasses Model of AI Trust allows us to understand a dominant operational logic informing current and upcoming innovations.

2 Theoretical framework

Against the background of all-encompassing info-communication systems (Kovtun et al. 2022), mediatization (Hepp 2020; Feher 2022) and platformization (Van Dijck 2021), artificial intelligence (AI) technologies develop and spread, extending human–machine communication (HMC, Guzman et al. 2023), AI-mediated communication (AI-MC, Hancock et al. 2020), generative AI (Pavlik 2023), algorithmic and proxy media (Blanchett et al. 2022) and super-human intelligence for interactivity (Guzman et al. 2023). These trends engender diverse computational and AI agent roles encompassing senders, receivers, communicators, mediators, producers, editors, authors, creators, designers, analysts, distributors, moderators, and fact-checkers. Although it is useful to delimit the research focus (Hancock et al. 2020), expansive research allows the exploration of visions of a distant future through free, abstract, or specific associations. This is especially true for complex and rapidly evolving AI technology, which influences trust attitudes and decision-making, among others in media consumption (Araujo et al. 2020).

Although AI is a set of sophisticated agents, it tends to be interpreted as a black box technology: the machine operation remains largely hidden from human comprehension (Rassameeroj and Wu 2019; von Eschenbach 2021). The concept of black box technology is still dominant and creates several uncertainties, even if a concept of glass box transparency is emerging, promising more understandable machine behavior for building trusted AI (Toy 2023). At the same time, AI is a driver of transformation in digital services across numerous areas of application. Humanity and AI technologies mediate one another via interaction and collaboration (Borsci et al. 2023; Verbeek 2015), despite such uncertainties and critical concerns. The datafication and deep mediatization of society are rewriting the political economy of society and the IT industries (Brevini and Pasquale 2020), as well as everyday life, leading to extremely high expectations or future nightmares expressed by different stakeholders (Mansell and Steinmueller 2020). In doing so, the responses and expectations surrounding AI mirror earlier technological transformations, which inspire both utopian and dystopian scenarios (Feher and Veres 2022).

Of seminal concern here is what drives AI regarding ICM, with branches of several technologies accelerating existing processes (Hui 2021) and introducing new pathways. We can distinguish at least two driver functions in the case of ICM. First, AI communicates, interacts and audio-visualizes (Fletcher 2018), thus making itself perceptible to humans. In short, it represents itself via ICA as a surface of the technology. This process is an elementary way for humans to sense and reflect directly on the technology they create. Thus, the products of AI-driven ICM constitute one of the most controversial fields. Second, image, sound, and text have become AI-mediated through datafication and algorithmic operations (Ellis and Tucker 2020). The technology does not only support information flow in this way but also restructures the concepts of previously known new media and computer-mediated communication (Guzman and Lewis 2020). Big/smart/synthetic data, machine and deep learning, and neural and recommendation networks train algorithms to determine what is popular, personalized or fake in temporarily relevant systems.

Considering these two driver functions, we can theorize the meaning of AI in the particular context of ICM fields following the approach to media as socio-technical systems, whereby media are seen as the “intersections of technical knowledge, humanistic investments, social relations, economic models, political stakes, and aesthetic expression that people use to understand and shape their lives” (see: uscmasts.org). The term “info-communication” also deliberately comprises both computer-based and human-driven processes in interaction (Targowski 2019). Therefore, these mutually foundational and complementary processes both enable and constrain human civilization and social inclusion primarily via advanced ICM technologies. Consequently, the application fields of ICM processes can be found throughout various sectors of industry and society.

AI-driven ICM flourishes in synthetic worlds (Gunkel 2019) where previous media and communication operations seem less sustainable (Chan-Olmsted 2019), and where the identification of (degrees of) reality significantly depends on machine learning (Waisbord 2018). Cost-effective and productive operation is an industry-wide expectation stemming from the buoyancy force of AI (Mustak et al. 2023; Preu et al. 2022; Wirtz 2020), even if trustworthiness, reliability, and bias are all at stake. In parallel, a key requirement is to control (and clean up) datasets, fight disinformation and foster truly diverse, inclusive and reliable content (Georgieva et al. 2022). These transformations support benefits and trigger uncertainties, especially as the above-mentioned driver functions reveal socio-technical challenges: inequalities (Holton and Boyd 2021), system biases (Rawat and Vadivu 2022), cultural-economic colonization (Bell 2018), data colonialism (Couldry and Mejias 2019), data-driven surveillance with privacy issues (Fossa 2023) and political destabilization through fake campaigns (Borsci et al. 2023). The interactivity and virality of personal assistants, virtual influencers, AI-produced content and art, or deepfakes lead to fundamental questions such as how information sources can be evaluated or how they will add value to an existing ICM process or service. Furthermore, the livelihood of many practitioners in creative roles in these fields seems to be at stake (as exemplified by the 2023 writers’ strike in Hollywood).

Accordingly, ICM experts working in various areas face several challenges as the complexity of their field of expertise grows (Swiatek et al. 2022). Besides being active in their fields, they also contribute to setting directions for the future – their own as well as that of generations to come. Given that their responsibility is inherently connected to minimizing human errors and maintaining trust in AI systems, the pressure is on (Borsci et al. 2023; Ryan 2020; Amaral et al. 2020). This study aims to understand their current interpretations, future projections and the dynamics of their visions.

3 Assumptions and the research questions

Considering the theoretical framework and its key components, four assumptions frame our project:

  • AI fundamentally transforms ICM systems in several ways (Ellis and Tucker 2020; Guzman and Lewis 2020; Chan-Olmsted 2019; Gunkel 2019; Fletcher 2018; Waisbord 2018).

  • Multifarious benefits are available, from cost-effective operation to productive work processes (Mustak et al. 2023; Georgieva et al. 2022; Preu et al. 2022; Wirtz 2020).

  • Various uncertainties and dangers exist, from fake media and systemic bias to the possibility of abuse (Feher and Veres 2022; Borsci et al. 2023; Rawat and Vadivu 2022; Holton and Boyd 2021; Bell 2018).

  • Socio-cultural values are discussed in parallel to trust issues (Borsci et al. 2023; Feher and Katona 2021; Ryan 2020; Amaral et al. 2020).

The assumptions were transformed into a series of explorative research questions allowing for free associations.

RQ1. How are ICM processes changing due to AI technology?

RQ2. What are the benefits of the change?

RQ3. What disadvantages shape the changes?

RQ4. How do socio-cultural approaches change, and how does this process affect trust in AI technology?

4 Method and sampling process

We conducted the survey online with a sampling approach based on sufficient diversity of member characteristics and the four research questions guiding our project (Jansen 2010). In addition to a first set of demographic questions, participants received two or three complex questions per week and were given four weeks in which to answer them. This schedule allowed the participants to (1) plan their time, (2) stay engaged in the process, (3) have time to recall their knowledge or check academic/professional sources for more detail, and (4) give complex answers to broad questions on a weekly basis.

With the goal of obtaining a diverse expert sample in a multidisciplinary field, the opening round of survey questions addressed world regions, gender, profession, sector, years of experience, and background (academia, business, policy-making or NGOs). These are all relevant to understanding different perspectives, especially in the case of a relatively small sample. A second round of questions was derived from the RQs, broken down into detailed subquestions while accounting for the two time dimensions of present and future:

  1. What is the meaning of AI technology to you? (Question of AI terminology)

  2. How would you imagine your life in 2050 when you communicate, entertain, and acquire information via AI-generated or AI-supported services? (RQ1)

  3. Predominantly, how does AI technology shape the media, information and/or communication process inside and outside of your industry? What will change in this field in the future, till about 2050? (RQ1)

  4. What are the key benefits of AI applications in the case of media, information and/or communication technology? What will change in this field in the future, till about 2050? (RQ2)

  5. How does AI technology support media, information and/or communication production or consumption? What will change in this field in the future, till about 2050? (RQ2)

  6. What are the disadvantages of AI applications in the case of media, information and/or communication technology? What will change in this field in the future, till about 2050? (RQ3)

  7. How does AI technology generate issues and uncertainties in the future from media, information and communication technology? What will change in this field in the future, till about 2050? (RQ3)

  8. Does AI technology change our social-cultural values and norms via the media and information-communication technology? If yes, how? If no, why not? What will change in this field in the future, till about 2050? (RQ4)


More than 300 experts were selected and invited based on their LinkedIn profiles, and 42 participants joined from all world regions. Of these, 25 participants engaged with detailed answers throughout the four-week sampling process. Every week, they received a link with survey questions, which they had one week to answer via a Google Form. The participants joined voluntarily and anonymously, consenting to a GDPR-compliant contribution to the sampling, which was conducted in 2022.

The respondents summarized their experience, observations, practice, visions, and examples. We rewarded the 25 participants for their engagement with an executive summary of the results. This response ratio is suitable given the complexity of the topic, the multi-week duration of the survey, and the absence of additional rewards.

After compiling the whole dataset in one database, data cleaning was applied to eliminate mistyping and other language errors. The final dataset contains 21,441 words for the whole sample and 18,831 words for the answers of the 25 participants alone. NVivo qualitative software (version 11) was applied for detailed analysis. Horizontally, the answers were automatically coded in terms of the survey questions, while manual codes were applied vertically to identify and synthesize the most common topics, considerations, beliefs and uncertainties. To support the credibility of the analysis, two authors independently performed line-by-line manual coding. The cross-checked coding also supported making memos about the most relevant and agreed-upon patterns in the texts for interpretative analysis. We detail the survey results and analysis in the next section.

5 A diverse sample

The sampling method yielded diverse member characteristics (Jansen 2010). The 42 participants of the first week (hereafter entire cohort) came from all world regions and had an average of 5–10 years of work experience in several fields (see Table 1). From the entire cohort, 25 participants (the subset cohort) engaged for the whole sampling period, representing primarily female and academic respondents, covering all age groups, almost all world regions, and about half of the disciplines and sectors (see Table 1). Despite the diversity of the sample, the answers were surprisingly homogenous, with similar visions. Only the African respondents highlighted quite different perspectives, raising some significant concerns about the uneven impact of AI. Since there were more female respondents, gendered concerns also surfaced, with examples of a high rate of male AI professors, under-represented female data, and the negative impact of AI-generated images in pornography and social media. In addition, experts from academia and NGOs were the most sensitive to issues of ethics and trust.

Table 1 The sample

6 Findings

6.1 Terminology and future vision of AI

Respondents from the entire cohort defined AI as either advanced technologies or automated systems – or merely their fetishization. In both cases, machines are trained to achieve goals like problem-solving and predictive analytics. A few emphasized instead the machine’s capacity for learning and mental tasks.

Although not all participants are likely to be alive in 2050, they formulated a vision of a supportive AI system working instead of humans as well as for humans. Advanced search engines and wireless connections are expected to enable direct access to all information. Sociable AI combining virtual assistants and humanoid robots, in line with the Computers Are Social Actors (CASA) paradigm, will enhance interactivity. Everyday routines will be assisted, resulting in AI-generated personalized experiences that significantly affect human decision-making and emotions. Moreover, a few respondents envisioned perfectly personalized AI bubbles catering to individual human needs and desires.

Well-being, security, and sustainability via technological developments were the focus of the answers. Most respondents discussed technological determinism while seeking to avoid dystopic sci-fi scenarios and political misuse. This approach stresses the need for AI governance and ethics in human–AI collaborations. Beyond the advantages and general changes, the concerns raised included dependence on machines, risks of outages, information wars, growing inequality, limits in accuracy, reduced human communication, the growth of AI-like behavior in entertainment, security issues and the AI-produced carbon footprint. The majority were rather optimistic and believed in the power of NGOs, art, youth empowerment and the rise of fringe regions with AI-driven decolonization and recolonization.

In conclusion, experts from the entire cohort offered a broad definition of AI affecting all aspects of social-political life, with AI potentially supporting human prosperity significantly, although they also raised a few concerns. Strong words, such as “hope” and “believe”, support the positive approach. Their technology-based vision of the future is fundamentally deterministic and rather optimistic.

6.2 Horizontal results: answering RQs

The subset cohort formulated a present-based vision of the future. Accordingly, two parallel worlds were defined for RQ1 by the experts:

  1. automated information and data processing for algorithmic news and fact-checking;

  2. misinformation, cheap fakes, semi-true news, and the misuse of AI technology.

They listed several AI-powered ICM methods and tools, including biometrics, social media, and data analysis, along with the most popular applications, such as Netflix, Twitter, Facebook, TikTok or the deepfake video of Volodymyr Zelensky. One of the respondents concluded that “AI is both a medium and an entity/actor”, which connects with literature such as Ellis and Tucker (2020) and Guzman and Lewis (2020), confirming the relevance of the research.

All participants agreed that there is a key conflict between democratic values and social control, although they could not predict how this dynamic will change. Yet, the majority of the respondents had a positive future vision of universal access to trustworthy AI in a democratic way, supported by AI-driven control and AI-generated mitigation of potential adverse effects.

A few participants proposed the idea that digital communication with nature will also be a source of AI improvements if interactions between animals and plants are connected to big data and smart systems. Thus, we formulated the term “nature listening” for this feature, based on the practice of “social listening” (Stewart and Arnold 2018), the advanced monitoring of social media activities by platform companies. The answers to RQ1 are summarized in Table 2, Row 2.

Table 2 Summary of the answers to RQs

Answering RQ2 about potential benefits, respondents’ “imagination” centered on always-available information, from cost-effective production to the prevention of negative impacts. At present, touchpoints support the funneling of experiences by chatbots or virtual assistants, such as Amazon Alexa and Apple Siri. In addition, virtual media and art creators impact human perceptions from music to visual images, such as AIVA, Dall-E, ChatGPT, Midjourney or PearAI.Art. The key challenges are legal, ethical and reliability issues, media fragmentation, social inequalities, the reproduction of colonialism and decreased creativity. Recalling the literature review, almost all key issues are repeated here.

The subset cohort participants felt that, by 2050, AI-driven ICM will beneficially influence how users experience their all-surrounding circumstances if human narratives are accelerated and augmented. They highlighted an omnipresent AI media that demystifies the ICM process once every human agent is wired. Open AI options were mentioned most often in connection with reaching a healthy democratic future with digital sovereignty and the power of communities. See more details in Table 2, Row 3.

RQ3 focused on the disadvantages and negative aspects of AI-driven ICM (Table 2, Row 4). According to the respondents, AI-driven ICM has drawbacks, primarily related to uncertainties caused by unreliable and biased data. Misinformation, filter bubbles, non-filtered information overload and machine-dominated communication and news production exacerbate these uncertainties. Without proper regulations and ethics, this can decrease industry value and harm societal processes. Accordingly, two scenarios were predicted most frequently for 2050:

  1. A lack of diversity and equity in the AI-driven ICM field, resulting in semi-fake news and the suppression of freedom of speech due to redefined censorship, centralization of ownership and exclusive gatekeeping functions.

  2. More reliable and accountable AI systems through transparency, regulation and the right to be forgotten. Thus, legal frames and innovation should reduce uncertainties, even if one female academic participant in AI Marketing from Europe emphasized: “Only one thing is certain: uncertainty. But what’s also assured is that organizations will unlock value from AI and encounter challenges in unexpected places.”

RQ4 asked about trust in AI technology and socio-cultural standards in the ICM process. Respondents noted that while we are only beginning to understand the impact of AI on society, economy, and culture, fundamental issues such as profit maximization and societal asymmetry remain unchanged. AI cannot independently alter socio-cultural values and norms but might amplify or repress them, leading to new forms of abuse and violence. A few participants imagined socio-cultural changes, but without specifics:

“Meanings are being affected through our experiences with AI and our interactions with social chatbots and other sociable AI. These meanings become part of our social discourse and socio-cultural values, norms and codes” — A male academic participant in communication, North America

Others mentioned two more examples of social change:

  1. Religions may become more open and tolerant in communicating on AI-driven platforms, inspiring different practices worldwide.

  2. Data sets can act as colonizing languages and cultures, thereby decreasing diversity, exacerbating existing inequalities and social biases, and resulting in further fragmentation.

Participants envisioned that by 2050, bots, tokens, and synthetic accounts will drive human perception and communication, resulting in a more pervasive role of technology throughout all aspects of everyday life. As one respondent expressively formulated:

“It may sound dystopic, but we are not very far from an internet overflowing with automatically generated content, in which the opinions of real people will be buried under a mountain of generated tokens. Another way in which AI affects human opinions is through recommendations, these focus on eliciting certain feelings, such as anger, which make users engage more with the content in hand. This can have effects such as the growing polarization we have experienced in the last decade.” — A male non-academic participant in computer engineering, Europe

Extreme personalization and profiling are expected to decrease cause-and-effect-based human thinking and automate intuition in the ICM field, resulting in ethical issues. While some fear extreme control or misleading information, others do not believe in significant changes in society and culture. Trust in AI technology is not a major worry, as social learning about changes is ongoing and primarily determined by technology, and new generations are already familiar with AI environments.

6.3 Vertical results: beliefs, uncertainties, trust

Manual coding made the most common approaches and considerations of the subset cohort available for detailed interpretation. According to the cumulative results, only a few experts are truly skeptical, expressing techno-pessimism; surprisingly, only one example addresses reliability issues:

“I will not believe any algorithm and do not consider it a reliable source. The press will not be a reliable source of information, nor will companies [offer] sound opinions, nor will the reports based on data collection.” — A female non-academic participant in entrepreneurship, Africa and the Middle East

The majority believe in what they are currently doing for the proper use of AI and look to the future confidently. Even if they articulate uncertainties, they believe AI can support numerous ICM processes effectively — from entertainment to the news. Illustrating the most fundamental driving force of techno-optimism:

“AI will simplify large amounts of data to crisp information adaptable to a form of media, person and place. A place where machine and human can work hand in hand.”

— A female non-academic participant in PR and corporate communication, Asia–Pacific

The other driving force is a negative or skeptical approach, informed by numerous uncertainties about the potential misuse of technology. Uncertainty primarily amplifies trust issues. However, beliefs and uncertainties can form twofold arguments, also highlighting the necessity of reliable technology:

“Like all technologies, it creates uncertainties in how it can be used for harm instead of good in society both intended and unintended. For example, it can suppress freedom of speech very effectively online by deleting specific subjects or ideas. I believe that by 2050 state/company-sanctioned misuse will see more adoption so effective regulations will be put in place for the fair use of AI.” — A male non-academic participant in information technology, Europe.

OR

“I believe that one of the disadvantages of AI is portrayed as an advantage: the scale of AI applications. While scalability per se offers viable solutions for a wide range of applications, it also poses risks to diversity in algorithmic technologies, as well as to local and context-sensitive AI systems that are needed in fields like media, information and communication technologies.” — A female academic participant in media studies, Europe.

Even if the pros and cons are not always placed so directly next to each other, it became clear during manual coding that arguments were primarily driven by personal beliefs (and corresponding uncertainties). Other possible logics, such as advantages vs. disadvantages or utopia vs. dystopia, were not reflected. People’s beliefs are the dominant forces for sensemaking and forecasting, resulting in generally neutral or outright positive future paradigms. Uncertainties were expressed less negatively or skeptically in this context. Instead, unknowns about the trajectory of AI seem to facilitate critical thinking, asking questions and rationalizing dominant beliefs and hopes.

A further noticeable result is that a significant proportion of the subset cohort participants argued that they could trust the technology because an AI-to-AI era is coming. To the participants, this means that AI solutions will answer problems and threats generated by AI. Trust is built by AI developments for humans. An illustrative example:

“I believe that only AI can help us in the fight of dis-misinformation and we had a good example with the Covid-19 case on Twitter, Instagram and Facebook, for example: whenever the platforms recognized that content was related to Covid-19, it’d signal it to its users and this didn’t only refer to textual information, but AI was able to scrape through image content that was Covid-related and modulate its reach accordingly.” — A female academic participant in behavioral economics, Europe

In many cases, a trusted future, as well as a definitive hope, is detectable in their replies. The word “hope” mainly appears in the statements of middle-aged European participants with an industrial background and 5 to 10 years of experience in AI. Their hopes further strengthen personal beliefs, which tend to come with a highly positive and reassuring vision, such as:

“By 2050, I hope AI gives everyone the same opportunity to be great.”

— A female business participant in AI Ethics, North America

OR

“Hopefully, the technology of information source verification will develop to the point where these issues may be successfully tackled and resolved.”

— A male academic participant in philosophy and visual communication, Europe

Summarizing these results, almost all participants of the subset cohort envisioned a unified AI future, reflecting their beliefs and hopes for the impact of AI on the ICM process by 2050. This way of sensemaking and forecasting highlights a reciprocal dynamic between personal beliefs and expressed uncertainties. In addition, the high frequency of the words “hope”, “belief”, “uncertainty” and their synonyms suggested moving the vertical analysis forward by manually analyzing the contexts of these words. After cross-checking by the authors, beliefs and uncertainties could be coded separately and then paired. The context analysis connected this result directly to trust issues and trusted AI through hopes and critical thinking. These beliefs and uncertainties in a trusted AI context are critical when shaping policy agendas and developmental trajectories (Gross et al. 2019).

Accordingly, we propose the Glasses Model of AI Trust to summarize the essence of how this dynamic generates a future vision by the experts participating in our survey (Fig. 1).

Fig. 1 Glasses Model of AI Trust

The “"glasses model”" is a conceptual framework to understand how trust is balanced between beliefs underpinned by hopes and uncertainties fostered by critical thinking. Within this model, trust emerges as the fulcrum, maintaining a balance between these two contrasting elements resulting in dynamic strategies. Through the strategically combined lenses of beliefs and uncertainties, the experts are, on one hand, involved in shaping technology and, on the other, see future alternatives. Their personal beliefs tend to strengthen trust in AI applications, while their uncertainties foster critical thinking without veering toward dystopianism. Trust in AI represents a balanced position or agreement that the technology can be ultimately reliable.

Our findings indicate that the key drivers of the model are hope and critical thinking. The participants hope AI integration will ultimately harmonize with human cultures, societies, and natural ecosystems. They believe that technology-driven effectiveness enables a greater focus on human creativity and imagination, while also fostering fairness, inclusivity, and the promotion of human rights. To illustrate this point:

“I envision a future in which the models upon which AI is developed will be perfected and – if regulated ethically and morally – will help us find a balance between freedom of speech, inclusivity, and the eradication of fake-news.”

— A female academic participant in behavioral economics, Europe.

The other driver, critical thinking, is deployed when considering how AI might change job market dynamics and accelerate distrust. Two quotes that illustrate such concerns:

“Some jobs are going to be replaced. Some will disappear. And some new jobs will come along. Human workers need to enhance the core competitiveness.”

— A female academic participant in communication and media, Asia

“There are concerning issues related to how deepfakes/synthetic media can promote distrust and misinformation, especially in today's infodemic and post-truth world. Introducing even more information synthetically, such as AI is wont to do, could worsen these issues.”

— A female academic participant in AI policy, Africa and the Middle East

Hope and critical thinking can be seen as coping mechanisms in striking a balance between beliefs and uncertainties, specifically for appreciating the role of trust in AI. Belief-based visions focus on the expected democratization of global conversations as language barriers disappear:

“By 2050, I believe people will break language barriers [with the help of AI]. Language translation will probably be solved by that time, with services automatically translating the language accurately in real time. The internet will be more integrated as the language barriers fade away.”

— A male academic participant in NLP and IT, Europe

On the other hand, uncertainties take into account the challenges of polarization, disinformation, and anti-democratic movements amplified through AI applications:

“AI systems could be utilized for anti-democratic purposes such as citizen surveillance and intimidation of activists and dissidents. We need cross-cultural AI ethics that take these dangers seriously.”

— A female academic participant in marketing, Europe

Taken together, the beliefs and expressed uncertainties of AI experts give rise to strategic dynamics regarding (the future of) AI, balancing optimism with realism and guiding AI developments in an ambitious yet practical way. These dynamics integrate beliefs and foster potentially innovative solutions with regard to uncertainties about societal and environmental impacts. As our study participants reported, all of this may enhance trust through transparent, realistic expectations of AI’s capabilities and limitations. One of the participants stated:

“An ethical foundation for tech is a must for humanity. Cross-machine learning will empower humans even more, subject to quality data and data trust.”

— A female academic participant in knowledge management and tech consulting, Europe

The glasses model, therefore, underscores responsible usage and societal well-being for the distant future over profit maximization or political control. Although optimism is essential for the model, it also acknowledges potential negative impacts, emphasizing a balanced and realistic approach. Therefore, strategic dynamics in the trusted AI model represent a value-laden horizon for comprehensive policy-making, responsible business practice, and risk management.

The Glasses Model of AI Trust can be used to better understand the reasoning and rationale behind the development of further AI services within the ICM field, especially when reliability or ethical issues arise. Beliefs prioritize the opportunities made available by AI technology, while uncertainties clarify the direction of these priorities along with the appropriate questions. Examining the vertical results accordingly, an extended model summarizes the mostly techno-optimist beliefs and the subsequent critical questions in cases of uncertainty (Fig. 2).

Fig. 2 Glasses Model of AI Trust with a specification for ICM

Most respondents emphasized how responsible thinking increases in the case of black box technology, primarily when AI impacts free will, privacy, and power over people’s ability to make choices, think and interact effectively with others. Respondents argued along the lines of vision-focused questions and strategic proposals when broad questions were raised, such as “what kind of a society do we want to be?” We also found a broad perspective with a concept of interconnectedness (Herbrechter et al. 2022): if humans, machines and nature can connect in a sophisticated manner and these ecosystems can learn from each other, it may lead to a less biased or perhaps even an unbiased future. Beliefs define the necessary inputs and scalability for the technology and the positive imaginations of democratized, less-fake, unbiased communication, information production and entertainment — resulting in more significant judgment, better interpretation, and creativity. Such positive beliefs and the identification of potential benefits were much more prevalent than any uncertainties among our study participants — which may have something to do with the self-selecting nature of our sampling method.

7 Discussion and conclusion

The paper aimed to present research on current and future AI-driven ICM processes, surveying experts working in related fields and representing all world regions. Recalling the original assumptions, the experts agreed on the first two:

  • AI transforms ICM systems in several ways (Ellis and Tucker 2020; Guzman and Lewis 2020; Chan-Olmsted 2019; Gunkel 2019; Fletcher 2018; Waisbord 2018), and

  • multifarious benefits are available from cost-effective operation to productive work (Mustak et al. 2023; Georgieva et al. 2022; Preu et al. 2022; Wirtz 2020).

The third assumption focused only on fake media, systemic bias and the possibility of abuses, given the context of profound uncertainties regarding AI (Borsci et al. 2023; Feher and Veres 2022; Rawat and Vadivu 2022; Holton and Boyd 2021; Bell 2018). However, this particular survey showed more comprehensive dynamics of visions strongly determined by participants’ personal beliefs and negotiation of uncertainties, which we modeled as a pair of glasses on a fulcrum — a Glasses Model of AI Trust.

The model is the key contribution of this paper, highlighting emerging perspectives on AI among people who work in the field and who are responsible for implementing these technologies. The model allows for the analysis of abstract concepts, such as how an interconnected human–nature–technology ecosystem can improve AI, or how AI developments can fight against AI misuse. Thus, the model can be extended to different disciplines and industries. It may contribute to testing specialized applications, future planning or evaluating other AI innovations. Policy agendas, developmental trajectories, and academic research are also invited to test the model and discuss the results. It seems clear that while these experts are keenly aware of the pitfalls and problems of AI, their own beliefs and roles mitigate any profound concerns, as some participants went so far as to suggest that, in the future, AI will be the best defense against any problems caused or exacerbated by AI.

Last but not least, the fourth assumption was not accepted. According to the respondents, fundamental human/social values will not change. Although semi-truth news and demystifying the media process will be a new norm, such processes were not seen as directly affecting democratic values. Participants put much faith in new generations to continue to build balance and trust in emerging AI systems, which would preserve key social values.

To sum up, the experts linked their visions to the present and to general AI discourses, without abstract or specific associations. Two primary factors can explain this finding. First, their professional routines and responsibilities often lead them to view the distant future as a simple extension of the present. Second, they may suffer from a professional near-sightedness shaped by trending knowledge of AI, especially if they do not see themselves as exclusively responsible for future directions. Both of these factors might also be the reason behind the (over-)optimism.

The over-optimistic scenarios express hopes and beliefs, passing the responsibility on to future generations, technology and nature-inspired solutions. Although we did not set out to critique or question our study participants about their sensemaking process regarding technology, techno-optimism holds risks and may fail to deal with worst-case scenarios (Del Rosso 2014).

Some themes do not have enough data to support them, or the data are too diverse (Braun and Clarke 2006). Also, generative AI (van Dis et al. 2023) was not yet much hyped during the sampling process. These topics are missing and call for further research to understand the field from more diverse perspectives, disciplines and expertise.

Unexpectedly, the participants rarely mentioned certain fundamental topics of the field, such as platformization, the Metaverse, mixed realities, machine learning, sentient technologies, and more. Possible reasons for this could be that (1) these are not necessarily the future they expect, (2) they do not use these applications regularly, or (3) they are unaware of the black box technology behind these services. We also cannot exclude the possibility that considering profoundly controversial aspects of AI is in itself problematic for those whose livelihoods are connected with the implementation and further development of AI. Their roles and commitment to their work may cause them to envision more favorable future directions, leading to a mostly optimist and partly pragmatist approach, according to Makridakis’s (2017) typology of AI scenarios.

To summarize the key result, participants shared a positive and even hopeful vision of an AI-driven ICM process. While they were aware of urgent questions regarding the growing technological determinism in how these fields (and their work) develop, they chose to remain hopeful based on personal beliefs and a limited questioning of remaining (and emerging) uncertainties regarding AI. They believe new generations are learning from the lessons of AI so far. Thus, they are passing the responsibility to the next generation and to new AI developments.

Consequently, the experts are not overly concerned about negative future scenarios. Although with this approach they also stated that they and their generation are not the only ones responsible for future AI in ICM, their optimistic interpretations and visions make them aware of their role as technology ambassadors. Thus, the experts in this study are fulfilling their role of being trust mediators (Bodo 2021).

Recalling one of the key messages of the experts, AI is both a medium and an entity/actor. Accordingly, AI does not only drive ICM but is seen as capable of fighting the disadvantages or misuse of advanced technology. According to the participants, the AI-to-AI era is coming, but AI-driven ICM will not radically transform social structures or values in this process. Moreover, the ultimate goals emphasized, such as focusing on well-being, security and sustainability, or avoiding colonization, political misuse and growing inequalities, can be feasible. Especially if universal access to trustable AI and “nature listening” confirm or improve social values, a hopeful future is envisioned. It is a rather optimistic, AI-driven future, where the paradox between a firm conviction of the sustainability of human values and an overreliance on AI-to-AI systems to remedy any problems and uncertainties in the process remains a blind spot for these experts, insofar as our survey is concerned.

In conclusion, the conflict between reliable and fake systems returns in the context of beliefs and uncertainties, but a value-saturated, typically reliable, AI-dominant vision emerges. Experts believe that what they build and trust in becomes available along the lines of hopes and uncertainty-based critical thinking. This positions them as mediators and ambassadors of AI, connecting current trends to the distant future and investing in a so-called “trusted AI”, even though it remains unclear from their answers what this trust would be based on – other than on more AI. Furthermore, they tend to disregard the possibility of AI assuming their expert roles as an agent within a relatively brief timeframe, producing data- and algorithm-driven visions and more diverse scenarios. This might simply be cognitive dissonance.

Finally, it is necessary to mention that the sampling process was completed just before the announcement of ChatGPT and the hype around generative AI. Consequently, this was last-minute research that could still receive long text-based responses from human experts without assuming machine-generated support. In the future, such research will be harder to imagine. The consequences of this need to be incorporated into analytical work — particularly if this work will itself be supported by AI. The next question concerns what AI future will be forecast by applying joint human–AI analysis.

8 Limits

First and foremost, the sample was rather small and less diverse in terms of engineering knowledge, even though engineering and computer science are the dominant areas of expertise in the scientific literature on the topic. One of the reasons for this was that significantly fewer experts defined themselves on LinkedIn as being active in ICM combined with engineering or computer science. Another reason could be that the survey questions seemed less relevant to them. Third, the participants provided relatively little new information beyond the knowledge we could gather from the literature and trade publications. This is surprising, considering that the responses came from all regions of the world and several academic and professional fields. However, this result had two advantages. On the one hand, it summarizes the currently available knowledge and its limits. On the other hand, it made the interpretation of professional roles and attitudes more conspicuous, which comprehensively affects the ICM process and the direction of AI developments broadly.