Abstract
AI/ML increasingly impacts the ability of humans to have a good life. Various sets of indicators exist to measure well-being/the ability to have a good life. Students play an important role in AI/ML discussions. The purpose of our study, which used an online survey, was to learn about the perspectives of undergraduate STEM students on the impact of AI/ML on well-being/the ability to have a good life. Our study revealed that many of the abilities participants perceived to be needed for having a good life were part of the well-being/ability to have a good life indicator lists we gave to participants. Participants perceived AI/ML to have, and to continue to have, the most positive impact on the ability to have a good life for disabled people, elderly people, and individuals with a high income, and the least positive impact for people of low income and countries of the global South. Regarding the indicators of well-being and the ability to have a good life given to participants, we found a pronounced techno-positive sentiment. For 28 of the indicators, 30% or more of respondents selected the purely positive option, while no indicator drew that level of support for the purely negative option. For 52 indicators, fewer than 10% (but more than 0%) of respondents selected purely negative, and for 10 indicators no respondent selected purely negative. Our findings suggest that our questions might be valuable tools to develop an inventory of STEM and other students’ perspectives on the implications of AI/ML on the ability to have a good life.
1 Introduction
The ability to have a good life depends on many social parameters such as employment (Crow and Payne 1992), social status (Gehl and Ross 2013), geographical location (suburbs) (Greenbie 1969), food security (Neuwelt-Kearns et al. 2021), social norms (Hansen 2015), physical health and socioeconomic status (Xu et al. 2020), caring (Colombo 2014), sustainable living (Hansen 2015), respecting (Malti et al. 2020), being respected (Steckermeier and Delhey 2019), and power over one’s own life and experience of discrimination (Holmström et al. 2017). The UN Convention on the Rights of Persons with Disabilities is a checklist to indicate actions needed to enable more opportunities for a good life for disabled people (Johnson 2013; Kakoullis and Johnson 2020), and the same is said for children and the UN Convention on the Rights of the Child (Brusdal and Frønes 2014; Kutsar et al. 2019). What is seen as a good life has changed over time (Strachan 2010) and many views exist on what a good life entails (Beckman 2018). Many measures with various sets of social indicators exist that can be seen as measures of the ability to have a good life, such as The Better Life Index, The Canadian Index of Wellbeing, The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix, The Social Determinants of Health (SDH), and others (Wolbring 2021). It is recognized that Artificial Intelligence (AI) and Machine Learning (ML) are impacting many facets of people’s ability to have a good life (Canadian Institute for Advanced Research (CIFAR) 2018; European Group on Ethics in Science and New Technologies 2018; Floridi et al. 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019). Increasing students’ social impact literacy is one goal of AI/ML education (Chiu et al. 2021; Furey and Martin 2019; Garrett et al. 2020; Touretzky et al. 2019) and STEM education (Josa and Aguado 2021; Kelley and Knowles 2016).
To connect with the world of students, we phrased our social implication of AI/ML inquiry in the language of the ability to have a good life to gain knowledge on how STEM students perceive the societal impact of AI. We asked three questions: (1) What abilities do you see as important to have the ability to have a good life? (2) What is the impact of AI/ML on the ability to have a good life for different social groups? (3) What is the impact of AI/ML on all the indicators from the four well-being/ability to have a good life composite measures: (a) The Better Life Index (OECD 2020), (b) The Canadian Index of Wellbeing (Canadian Index of Wellbeing Organization 2019), (c) The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix (World Health Organization 2011) and (d) The Social Determinants of Health (SDH) (Raphael et al. 2020; World Health Organization 2020).
1.1 AI/ML and the ability to have a good life
It is argued that “AI impacts what we can consider the good life” and how we achieve “goals of wellbeing” and the “overall common good” Vesnic-Alujevic et al. (2020, p. 8), that the ability to have a good life “must include an explicit conception of how to live well with technologies”, and that the ‘good life’ means “a human future worth seeking, choosing, building, and enjoying” Vallor (2016, p. 12). It is argued that an ethics of AI concerning “the question of the good life and human and societal flourishing” Coeckelbergh (2019, p. 33) is needed and that technological advancement should engage with “questions about the good life and discussions of values” Buhmann and Fieseler (2021, p. 4). The report Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research (Whittlestone et al. 2019) lists as essential many research topics which can be seen to impact the ability to have a good life. Justice, solidarity, equity, and equality are concepts mentioned in many AI governance documents that influence the ability to have a good life (Lillywhite and Wolbring 2020). AI/ML impact various forms of well-being that reflect facets of the ability to have a good life, such as emotional well-being (Borjas and Freeman 2019; Fratczak et al. 2019; Khosla and Chu 2013; West 2018), sense of well-being and identity (Abeles 2016), economic well-being (Borjas and Freeman 2019; Fratczak et al. 2019; West 2018), the general well-being of a nation’s economy (Press 1982; Ullrich et al. 2016), well-being of society (Reddy 2006), and societal well-being (Aluaş and Bolboacă 2019; National Academies of Sciences 2018). The report Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems focuses on how well-being, including “societal and environmental well-being”, can be improved (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019, p. 1) and suggests various measures.
1.2 AI education
Most education on AI focuses on how AI works, hands-on education, and how to increase AI technical literacy (Chiu et al. 2021; Heintz 2021; Steinbauer et al. 2021). However, AI education, including AI ethics education, also identifies goals around educating on the societal impact of AI (Chiu et al. 2021; Furey and Martin 2019; Garrett et al. 2020; Long and Magerko 2020; Touretzky et al. 2019), covering “bias, automation and robots, law and policy, consequences of algorithms, philosophy/morality, privacy, future of AI, and history of AI” Garrett et al. (2020, p. 274), and diversity and inclusion (Chiu et al. 2021). It is argued that “in order to address the social impact of technical systems, including AI, we need to revisit the way we think about the norms of AI ethics education, and in particular address the tendency towards an ‘exclusionary’ pedagogy, that further siloes” Raji et al. (2021, p. 515). Phrasing the impact of AI/ML in the language of the ability to have a good life may resonate with students and support AI education’s goal of educating on the societal impact of AI (Touretzky et al. 2019), including AI/ML and the social good (Hager et al. 2019).
2 Engineering education
Social implications are seen as part of engineering education (Josa and Aguado 2021) and STEM education (Kelley and Knowles 2016), which is seen to have a social impact (Ramirez Velazquez 2021). The National Science Foundation (USA) acknowledges that “scientific merit is intertwined with broader impacts” and that “educational reforms must now center inclusion and equity” Elgin et al. (2021, p. 7). It is argued that the COVID-19 pandemic showed that academia has a responsibility to decrease structural inequities in education, whereby one strategy could be to engage STEM students in research (Elgin et al. 2021). At the same time, it is noted that problems exist in STEM education (Garibay 2015; Josa and Aguado 2021). It is recognized that having a positive social impact entices students to enroll in STEM (Bennett et al. 2021) and that a social awareness curriculum has an impact on the engineering identity formation of high school girls (Burks et al. 2019, p. 1). Various studies investigated the social responsibility of engineering students, seeing it as important but reporting problems (Børsen et al. 2021; Canney and Bielefeldt 2015; Schiff et al. 2021; Tomblin and Mogul 2020). Literature exists covering the competencies students are to obtain from engineering degrees. Authors covering STEM in secondary education argue that the question of OECD’s twenty-first-century skills, which include ethical and social impact under the header of communication skills, “has become one of the most important questions awaiting for an answer all over the world” Korkmaz et al. (2021, p. 424). Phrasing the impact of AI/ML in the language of the ability to have a good life could resonate with engineering students and help them engage with the social implications of AI/ML. Indeed, Plato’s 12 concepts of the ‘good life’ are suggested as a lens to think about the goals of engineers (Rodriguez-Nikl 2021).
3 Methods
3.1 Design
We performed a mixed-methods approach at the technique level (Sandelowski 2000). We used a directed content analysis of the qualitative data and frequency count and percentage measures of the descriptive quantitative data to analyze the answers of STEM students from one University to three questions: (1) What abilities do you see as important to have the ability to have a good life? (2) What is the impact of AI/ML on the ability to have a good life for different social groups? (3) What is the impact of AI/ML on all the indicators from: (a) The Better Life Index (OECD 2020), (b) The Canadian Index of Wellbeing (Canadian Index of Wellbeing Organization 2019), (c) The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix (World Health Organization 2011) and (d) The Social Determinants of Health (SDH) (Raphael et al. 2020; World Health Organization 2020).
We chose an online survey to reach as many student participants as possible and to give students the flexibility to participate in this study at their convenience. The survey received ethics approval from the University of Calgary (REB17-0785). The online survey was set up in such a way that we could not identify the participants or their IP addresses. The consent form alerted participants that the US government could access the data, as SurveyMonkey falls under US jurisdiction. Participants could stop the survey at any time and were free to choose which questions they wanted to answer.
3.2 Participants
Students were chosen as participants because student education is an important aspect of STEM degrees as well as of AI/ML education. The STEM students we accessed were chosen as participants for convenience purposes. The survey was distributed to four cohorts of individuals from four different University of Calgary STEM-related groups engaged in STEM and engineering extracurricular activities. Our criterion for participant inclusion was that they had to be currently attending an undergraduate or graduate studies program at a Canadian university. The survey was designed to take students between 30 min and 1 h.
3.3 Survey question development
The full survey consisted of n = 23 questions including demographic questions, simple yes or no questions with the option for comments (questions 9, 13, 15, 16, 20–23), and open-ended questions (questions 11–12) to obtain more detailed views of participants. It was developed by both authors, and a group of students gave feedback on the draft of the survey, keeping in mind the focus of the study and the literature around AI/ML governance and ethics. We present here the results of a subset of questions, namely: (a) demographics (questions 2–6); (b) participants’ views on the abilities needed for a good life (questions 11/12); (c) participants’ familiarity with AI/ML (question 14); (d) participants’ views on the impact of AI/ML on various social groups (questions 15/16); (e) participants’ views on the impact of AI/ML on indicators of the ability to have a good life from the measures: Social Determinants of Health, Better Life Index, Canadian Index of Wellbeing, and Community Based Rehabilitation Matrix (questions 20–23).
3.4 Data collection and analysis
We collected data through an online survey using the SurveyMonkey platform. We sent the link to the online survey to the students through personal contacts after ethics approval was received. The survey data were collected between March and April 2021. Quantitative data were extracted and analyzed using SurveyMonkey’s intrinsic frequency distribution analysis capability. The qualitative data obtained from the comment boxes that accompanied certain questions and from the open-ended questions were exported as one PDF file into ATLAS.ti 9® software for analysis (Braun and Clarke 2013; Hsieh and Shannon 2005), and we performed a directed content analysis to better understand the ability-related views and knowledge of participants. Regarding the analysis of the qualitative data, the two authors first familiarized themselves with the qualitative data by reading the whole PDF, then re-read the content, identifying potentially meaningful data by performing thematic coding on the data (Clarke and Braun 2014). We engaged in peer debriefing (Guba 1981), and differences in codes and theme suggestions were discussed between the two authors and revised as needed. An audit trail was generated using the memo and coding functions within ATLAS.ti 9®.
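The frequency distribution analysis referred to above reduces, for each question, to counting how often each answer option was chosen and expressing that count as a percentage of all responses. As a minimal illustrative sketch (the function name and the example data are ours, not part of the SurveyMonkey tooling or the actual study data):

```python
from collections import Counter

def frequency_distribution(responses):
    """Count each answer option and express it as a percentage of all
    responses to one question, rounded to two decimals to match the
    precision used in the Results section."""
    counts = Counter(responses)
    total = len(responses)
    return {option: (n, round(100 * n / total, 2))
            for option, n in counts.items()}

# Hypothetical answers to a yes/somewhat/no question:
frequency_distribution(["yes", "yes", "no", "somewhat"])
# → {'yes': (2, 50.0), 'no': (1, 25.0), 'somewhat': (1, 25.0)}
```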
3.5 Limitation
Given that we used an online survey instrument, we could not ask for clarification of answers. In addition, there might be a selection bias in the sense that only students already interested in the topic might have chosen to answer the survey. Furthermore, the high proportion of female respondents does not reflect the gender composition of engineering degrees; it might be that females in the clubs felt more drawn to fill out the survey. Our study design is an exploratory one and the intent was not to generate generalizable data. Indeed, our results suggest many follow-up studies to see what the answers are for different sets of participants, for example students linked to other areas such as science and technology studies, disability studies, ethics, and health sciences.
4 Results
The following sections present the findings from participant responses. Section 4.1 gives the demographics; Sect. 4.2 students’ views on the ability to have a good life (questions 11/12); Sect. 4.3 STEM students’ perspectives on the impact of AI/ML on the ability to have a good life in the moment and in the future (questions 15/16); and Sect. 4.4 STEM students’ perspectives on the impact of AI/ML on all indicators of four measures (Social Determinants of Health, the Better Life Index, the Canadian Index of Wellbeing, and the Community Based Rehabilitation Matrix) (questions 20–23).
4.1 Demographics
The response rate from the students we accessed was 13.14% (51 of 388 students in that setting). The participant population was composed of 91.67% females and 8.33% males. 97.92% were 18–30 years of age and 2.08% were 30–65 years of age. 97.92% of the participants were undergraduate students, while 2.08% were PhD students. More specifically, 27.08% were first year undergraduate students, 33.33% second year, 29.17% third year, and 8.33% fourth year undergraduate students. The population consisted of a majority of STEM students, specifically 60.42% engineering students (6.25% biomedical engineering, 6.25% chemical engineering, 10.42% civil engineering, 6.25% electrical engineering, 18.75% mechanical engineering, 6.25% software engineering, and 6.25% common first year engineering), 2.08% computer science students, 2.08% mathematics and statistics students, 18.75% biological sciences, 4.17% health sciences, 4.17% neurological sciences, 2.08% physiology, 2.08% kinesiology, 2.08% business, and 2.08% other (dual degree in mechanical engineering and business). 89.66% of students felt at least somewhat familiar with AI/ML.
4.2 Abilities needed to have a good life
93.10% of the participants believed they have a good life, while 6.90% suggested they do not know; no participants said they do not have a good life. n = 2 participants pointed to the subjectivity of assessing which abilities are important to an individual.
P1: “The ability to have a good life is very subjective and kind of insinuates that you have to be happy all the time in life…. It represents the mindset of chasing happiness…”
As to concrete abilities needed to have a good life participants listed the following.
P4: “basic living essentials such as food and water”
n = 6 participants suggested that basic needs must be met to have a good life.
P1: “Freedom of thought…”
P8: “freedom to move however, wherever and whenever you want (physical abilities)”
n = 5 participants suggested that freedom of various forms, including speech, physical abilities, goals, and religion, is needed to have a good life.
P21: “I do view my physical abilities as something that enables me to have a good life.”
n = 6 participants suggested that physical abilities are needed to have a good life.
P2: “living out your purpose”
n = 5 participants suggested that the ability to live out one’s purpose is needed to have a good life.
P20: “The ability to connect and form strong relationship with at least one other person.”
n = 7 participants suggested that forming relationships, social interactions, or receiving love is needed to have a good life.
P5: “Having a good life starts and ends with a mindset to achieve satisfaction and/or happiness.”
n = 11 participants suggested that mindset, contentment, and being happy are needed to have a good life.
4.2.1 Factors that impact the ability to have a good life
P20: “One’s socioeconomic status would also impact one’s ability to have a good life.”
n = 7 participants indicated that financial stability or socioeconomic status impacts one’s ability to have a good life.
P8: “Access to necessities such as food and water.”
n = 7 participants indicated that access to basic needs impacts one’s ability to have a good life.
P24: “My mental and physical health”
n = 11 participants indicated that mental well-being and mindset impact one’s ability to have a good life, n = 4 participants indicated that physical health does, and n = 3 participants named health in general as a factor that impacts one’s ability to have a good life.
P10: “the resources available to you in your hometown… the quality of the environment around you…”
n = 4 participants indicated that one’s location/country/environment impacts one’s ability to have a good life.
P6: “A good support system, and the people that surround me could contribute to attaining or not attaining stable mental health.”
P7: “Society, Peers, Family”
n = 9 participants indicated that relationships, support systems, and social interactions impact one’s ability to have a good life.
4.3 Impact of AI/ML on the ability to have a good life
4.3.1 Knowledge of AI/ML
When asked whether participants have knowledge of AI/ML, 62.07% of the participants said yes, 27.59% said somewhat, and 10.34% said no.
4.3.2 Impact of AI/ML on the ability to have a good life in the moment
Table 1 summarizes participants’ perspectives on the impact of AI/ML in the moment on various populations in society. The weighted averages for disabled people (7.07), the elderly (6.75), people of high income (7.63), and countries of the North (7.15) all lie above those of the other groups listed (responses of 0 not included).
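The weighted averages reported here are the mean of the scale values chosen by respondents, with each value weighted by how many respondents chose it and with 0 responses excluded, as noted above. A minimal sketch of that calculation (the function and the example counts are illustrative, not the study’s actual data or tooling):

```python
def weighted_average(counts, exclude=(0,)):
    """Weighted average of rating-scale responses.

    counts maps each scale value to the number of respondents who
    chose it; values in `exclude` (here 0, mirroring "responses of 0
    not included") are dropped before averaging.
    """
    kept = {value: n for value, n in counts.items() if value not in exclude}
    total = sum(kept.values())
    return sum(value * n for value, n in kept.items()) / total

# Hypothetical tally on a 0-10 scale: five 0s (excluded), two 5s, two 10s
weighted_average({0: 5, 5: 2, 10: 2})
# → 7.5
```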
Participants in the first year of undergraduate studies perceived AI/ML to impact all groups listed more positively, indicated by a higher weighted average. Participants in subjects of study not directly related to engineering/technology (i.e., excluding software engineering, electrical engineering, mechanical engineering, biomedical engineering, and computer science) indicated slightly lower weighted averages for most groups listed above in comparison to the other fields of study.
Participants that elaborated on their choices indicated two main factors behind the higher weighted averages of these four groups. First, n = 6 participants indicated that AI/ML creates wealth disparity and is more easily accessible to the wealthy and to developed countries.
P11: “more privileged group will benefit while the non-privileged will get left behind.”
P6: “potential for negative impacts too… worsen the gap between the rich and poor.”
Second, participants referenced the benefits for disabled people.
P18: “allow paralyzed people to access better wheelchairs”
P5: “Artificial body parts”
P2: “potential to be the eyes and ears for people who have disabilities with their senses or neurological disabilities (for the elderly and disabled people).”
4.3.3 Impact of AI/ML on the ability to have a good life in the future
Table 2 summarizes the participants’ perspectives on the impact of AI/ML on the indicated groups in the future. Overall, the weighted averages of all the groups questioned are greater compared to participants’ perspectives on AI/ML in the moment (other than for animals and nature). Disabled people, people of high income, and countries of the North are still weighted higher than the other groups, but the weighted average for the elderly blended into those of the other groups.
Participants in subjects of study not directly related to engineering/technology (i.e., excluding software engineering, electrical engineering, mechanical engineering, biomedical engineering, and computer science) indicated slightly lower weighted averages for most groups listed above in comparison to the other fields of study.
Participants’ comments showed perspectives similar to those for Table 1. Participants still see disabled people benefiting the most from AI/ML in the future. Further, participant comments suggested that in the future AI/ML will accommodate all groups more equally.
P4: “…overall these new technologies should equally affect everyone…”
P13: “AI technology is studied further it can be improvised to accommodate everyone equally…”
4.4 Indicators of measures of well-being
4.4.1 Community Based Rehabilitation Matrix
Table 3 summarizes the participants’ perspective on the impact of AI/ML on the indicators of the Community Based Rehabilitation Matrix. Healthcare-related indicators and indicators related to assistance and disabilities, including health promotion (46.43%), health prevention (46.43%), rehabilitation (46.43%), assistive technology (67.86%), personal assistance (51.58%), and disabled people’s organizations (50.00%), are seen to have a higher portion of only positive impacts than the other indicators. Indicators rated as not impacted in a higher proportion relative to the other indicators include recreation (21.43%), sport (25.00%), and self-help (25.00%).
When comparing participants studying fields related to AI/ML (software engineering, electrical engineering, biomedical engineering, mechanical engineering, and computer science) with those in other areas of study, those in AI/ML-related fields perceived the indicators mentioned above to have a greater positive-only impact, specifically health promotion, assistive technology, personal assistance, and disabled people’s organizations.
4.4.2 Canadian Index of Wellbeing
Table 4 summarizes the participants’ perspective on the impact of AI/ML on the indicators of the Canadian Index of Wellbeing. The data suggest that knowledge and living standards, at 48.00% and 40.00%, respectively, are impacted more positively relative to the other indicators in the matrix. The indicators seen as not impacted at a higher percentage than other indicators include leadership (25.00%), leisure (24.00%), and time (41.67%). Overall, Table 4 suggests participants see most of the indicators as impacted both positively and negatively.
4.4.3 Social determinants of health
Table 5 summarizes the participants’ perspective on the impact of AI/ML on the indicators of the Social Determinants of Health. As seen in Table 5, 50.00% of the participants indicated that health services will be impacted only positively, which is higher than for all other indicators. Participants suggested that many of these indicators are not impacted; the following stand out: food security (42.31%), housing (36.00%), gender (42.31%), coping (34.62%), discrimination (34.62%), advocacy (33.33%), physical environment (37.04%), social engagement (30.77%), and social status (40.74%).
Participants studying fields related to AI/ML (software engineering, electrical engineering, biomedical engineering, mechanical engineering, and computer science) perceived health services to be more positively impacted, at 71.43%, compared to participants in fields not directly related to AI/ML (chemical engineering, civil engineering, common first year engineering, mathematics and statistics, biological sciences, health sciences, neurological sciences, physiology, kinesiology, and business) at 42.11%.
4.4.4 Better Life Index
Table 6 summarizes the participants’ perspective on the impact of AI/ML on the indicators of the Better Life Index. Participants suggested that health, at 44.44%, is impacted more positively than the other indicators, relative to the next highest at 29.63% (safety). The table suggests that participants see most of these indicators as impacted both positively and negatively.
Participants studying fields related to AI/ML (software engineering, electrical engineering, biomedical engineering, mechanical engineering, and computer science) perceived health to be a more positively impacted indicator, at 71.43%, compared to participants in fields not as directly related to AI/ML (chemical engineering, civil engineering, common first year engineering, mathematics and statistics, biological sciences, health sciences, neurological sciences, physiology, kinesiology, and business) at 35.00%.
5 Discussion
Our study revealed that many of the abilities participants perceive to be needed for having a good life were part of at least one of the four well-being indicator lists we gave to participants. Participants perceived AI/ML to have, and to continue to have, the most positive impact on the ability to have a good life for disabled people, elderly people, and individuals with a high income, and the least positive impact for people of low income and countries of the global South. As to the indicators of well-being/the ability to have a good life given to participants, we found a mostly techno-positive sentiment. For 28 of the indicators, 30% or more of respondents selected the purely positive option, while no indicator drew that level of support for the purely negative option. For 52 indicators, fewer than 10% (but more than 0%) of respondents selected purely negative, and for 10 indicators no respondent selected purely negative. Our findings suggest that our questions might be valuable tools to develop an inventory of STEM and other students’ perspectives on the implications of AI/ML on the ability to have a good life.
5.1 Techno-positive and techno-optimistic sentiment of AI/ML impact on the indicators on the ability to have a good life
Our general techno-positive (in the moment) and techno-optimistic (perceived positive impact in the future) finding fits with the recognized techno-deterministic and techno-optimistic bias of reporting within the STEM education literature (Collett and Dillon 2019; Cormier et al. 2019; Garcia and Scott 2016; Vigdor 2011). It also fits with a study which found that, in the teaching of AI in technical studies, positive coverage was greater than neutral and negative coverage (Gherheș and Obrad 2018, Table 5, p. 8), technical studies being the origin of our participants, and that there was a techno-optimistic sentiment towards the social impact of AI development within technical studies (Gherheș and Obrad 2018, Table 10, p. 10). Our results also fit with a study in which participants had a positive perception of the impact of AI on their well-being and society, whereby higher knowledge of AI correlated with a more positive sentiment toward the impact on themselves and society (Jeffrey 2020, p. 12).
Our findings might also be a consequence of what students can access in the academic literature in the first place, independent of a positive, neutral, or negative tone of coverage. A recent study (Wolbring 2021) investigated the engagement of the academic literature focusing on AI/ML and other technologies with over 21 well-being measures. It found that of the 353,233 abstracts containing the terms artificial intelligence or machine learning, none covered 14 of the 21 measures, 5 of the measures were mentioned in 5 or fewer abstracts, the phrase “social determinants of health” was present in 41 abstracts, and the phrase “determinants of health” in 53 abstracts. Furthermore, the study (Wolbring 2021) found a very uneven coverage of the individual indicators of the measures we gave our participants, (a) The Better Life Index, (b) The Canadian Index of Wellbeing, (c) The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix, and (d) The Social Determinants of Health (SDH), with few sources containing terms such as “social norms”, “social status”, “personal well-being”, “living standard”, and many others that could be used to discuss and trigger thinking about the impact of AI/ML on a good life.
Our techno-positive and techno-optimistic findings in relation to disabled people fit with, and might be a consequence of, disabled people being mostly mentioned in a techno-positive and techno-optimistic way in the AI/ML-focused academic literature, newspapers, and Twitter tweets (Lillywhite and Wolbring 2020), and of disabled people being for the most part mentioned in that literature through the imagery of the patient or benefiting user (Lillywhite and Wolbring 2019, 2020). A techno-positive and techno-optimistic tone does not nudge readers towards thinking about negative social implications for disabled people, such as for their ability to have a good life, a possibility we know exists (Diep and Wolbring 2013, 2015; Lillywhite and Wolbring 2020; Nierling et al. 2018; Wolbring and Diep 2016; Yumakulov et al. 2012). However, this techno-positivity and techno-optimism in our study was not limited to disabled people, and as such our findings could be a consequence of a generally techno-positive and techno-optimistic exposure to AI/ML in the education of the participants or in other sources through which they are informed on AI/ML. Interestingly, 42.31% indicated that there is no impact of AI/ML on gender (Table 5), which is surprising given that Amazon, for example, had to stop its human resources AI due to a bias against women (Reuters 2018) and that over 90% of our participants were women.
5.2 Adding indicators for future studies
Many of the items our participants flagged as essential for living a good life are covered by the composite measures we gave to the participants, such as basic needs including food and water, forming relationships, social interactions, support systems, financial stability, socioeconomic status, and various forms of health. Other social and ethical issues mentioned in the AI/ML literature could be added as indicators of the ability to have a good life, such as freedom of thought, the ability to live out one’s purpose, mindset, contentment, being happy, privacy, data protection, technological deskilling (Vesnic-Alujevic et al. 2020), solidarity (European Group on Ethics in Science and New Technologies 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019), equity and equality (European Group on Ethics in Science and New Technologies 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019; Yuste et al. 2017), respecting (Malti et al. 2020), being respected (Steckermeier and Delhey 2019), dignity, health equity (Wolbring 2021), ethnic, gender, and social bias (Allen and Dreyer 2019; Pham et al. 2021; Straw 2020; Tat et al. 2020; Walsh 2019; Weissglass 2021), and various types of well-being noted to be impacted by AI/ML, such as emotional well-being (Borjas and Freeman 2019; Fratczak et al. 2019; Khosla and Chu 2013; West 2018), sense of well-being and identity (Abeles 2016), economic well-being (Borjas and Freeman 2019; Fratczak et al. 2019; Press 1982; Ullrich et al. 2016; West 2018), and societal well-being (Reddy 2006). Making one big list of the indicators would allow for obtaining insight into the views of participants on the impact of AI/ML, and for that matter of many other technologies, on the ability to have a good life.
6 Conclusion and future research opportunities
In the report Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019), it is stated:
“To be able to contribute in a positive, non-dogmatic way, we, the techno-scientific communities, need to enhance our self-reflection. We need to have an open and honest debate around our explicit or implicit values, including our imagination around so-called “Artificial Intelligence” and the institutions, symbols, and representations it generates” (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019, p. 1).
The report furthermore argues that “ethical design of Autonomous and Intelligent Systems (A/IS) has to have provable improvements to societal well-being and that discussions on and mitigation of risks of potential negative long term effects on societal well-being are needed” (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019, p. 70). Our questions might be useful tools for acting on the sentiment voiced in the report.
Our survey might also be useful for achieving various goals from Table 1 of Tomblin and Mogul (2020), such as “to go beyond technical narrowness of STEM education and embrace reflexive, critical systems thinking”, “cultivate social justice mindsets among STEM students who are yearning for this and may leave STEM in search of it”, “encourage students to become reflexive, empathetic data collectors who ask relevant STS questions of their work”, and “create agents of change that explore alternative pathways for science and technology” (Tomblin and Mogul 2020, p. S120).
Our study suggests a techno-positive and techno-optimistic sentiment among the students in relation to certain social groups and to the indicators of the ability to have a good life, but more studies are needed to compare our data with those of other sets of participants. Indeed, given that one study found students’ perceptions of AI to be more positive in technical studies than in humanistic studies (Gherheș and Obrad 2018, Table 5, p. 8), and found a more techno-optimistic sentiment towards the social impact of AI in technical versus humanistic studies (Gherheș and Obrad 2018, Table 10, p. 10), it might be worthwhile to see whether students from humanistic studies answer our questions differently.
It might also be beneficial to conduct interviews instead of an online survey. One could ask participants the same questions we did, but with the opportunity to pose follow-up questions. For example, given participants’ answers in the “no impact” option in Tables 1, 2, 3, 4, 5 and 6 of our study, it would be useful to have participants fill out the tables at the beginning of the interview and then ask them to clarify their “no impact” responses. One could also focus on the answers to specific indicators in Tables 3, 4, 5 and 6 and ask participants for clarification.
As to the questions related to Tables 1 and 2, one could add more social groups to choose from, and one could generate intersectional social groups. One could also differentiate further within some of the social groups we used, for example by depicting different ethnic groups, including Indigenous Peoples, instead of treating ethnic groups as one category. As for disabled people, one could differentiate based on why individuals are labeled as disabled, as AI/ML visions can be expected to impact disabled people with different characteristics in different ways. It would be interesting to see how participants judge the impact on various groups of disabled people; it is recognized that data need to be disaggregated for different groups of disabled people (Bureau of Labor Statistics United States Department of Labor (USA) 2020; International Disability Alliance (IDA) 2017; Washington Group on Disability Statistics 2020; Wolbring and Lillywhite 2021). Given that various academic degrees and programs focus on different social groups, one could give the questions linked to Tables 1 and 2 and their answer options to students of various degrees and programs to see whether the answers differ; for example, would students in women’s studies and disability studies answer the questions differently in general, and especially in relation to women and disabled people, respectively? One could also design a study that ensures a more even gender distribution, so one could compare, for example, whether males and females fill out Tables 1 and 2 differently.
As to the questions related to Tables 3, 4, 5 and 6, one could use all the indicators with different groups of participants, or with the same population (STEM students) but a different sample of STEM students, and see whether the key trajectories are the same. One could design a study that ensures a more even gender distribution, so one could compare, for example, whether males and females fill out Tables 3, 4, 5 and 6 differently. One could also ask participants to answer Tables 3, 4, 5 and 6 for different social groups to see whether indicator-specific differences appear depending on which social group one uses as a lens.
Given that there are communities of academics, policymakers, practitioners, and others actively linked to The Better Life Index (OECD 2020), The Canadian Index of Wellbeing (Canadian Index of Wellbeing Organization 2019), The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix (World Health Organization 2011), and The Social Determinants of Health (SDH) (Raphael et al. 2020; World Health Organization 2020), studies could be done with our questions within these communities to engage with the impact of AI/ML on the ability to have a good life. Our survey questions could be given to students in course segments that cover these measures and indicators, and the surveys could be used in AI/ML and STEM education to ascertain students’ perceptions of the present and future impact of AI/ML.
One could also add indicators to the question list based on the existing AI/ML and other relevant literature. Instead of giving four different sets based on existing composite measures, one could simply create one table with a list of primary and secondary indicators, thereby adding indicators not present in the lists we used. One could also use other existing composite measures, such as “The Disability and Wellbeing Monitoring Framework” (Fortune et al. 2020).
Our study was, to our knowledge, the first to engage STEM students by asking them about the abilities they see as essential for having a good life and linking these to the social impact of AI/ML using well-being indicators. We used the indicators of well-being to make the ability to experience a good life more concrete for participants. There are many ability-related concepts in ability studies, such as ability security, ability identity security, ability expectation oppression, ability privilege, ability discrimination, ability inequity, ability inequality, and ability expectation creep (Wolbring 2020; Wolbring and Ghai 2015), that could be used to make the linkage between the well-being indicators and the impacts of AI/ML on a good life clearer.
Data availability statement
No data access is provided for reviewers beyond what is in the main document.
References
Abeles TP (2016) Send in the robots. On the Horizon 24(2):141–144. https://doi.org/10.1108/OTH-07-2015-0031
Allen B, Dreyer K (2019) The role of the ACR data science institute in advancing health equity in radiology. J Am Coll Radiol 16(4):644–648. https://doi.org/10.1016/j.jacr.2018.12.038
Aluaş M, Bolboacă SD (2019) Is the biggest problem of health-related artificial intelligence an ethical one? Appl Med Inf 41:3–3
Beckman L (2018) The liberal state and the politics of virtue. https://doi.org/10.4324/9781351325448
Bennett D, Knight E, Bawa S, Dockery AM (2021) Understanding the career decision making of university students enrolled in STEM disciplines. Aust J Career Dev 30(2):95–105
Borjas GJ, Freeman RB (2019) From immigrants to robots: the changing locus of substitutes for workers. RSF 5(5):22–42. https://doi.org/10.7758/rsf.2019.5.5.02.pdf
Børsen T, Serreau Y, Reifschneider K, Baier A, Pinkelman R, Smetanina T, Zandvoort H (2021) Initiatives, experiences and best practices for teaching social and ecological responsibility in ethics education for science and engineering students. EJEE 46(2):186–209
Braun V, Clarke V (2013) Successful qualitative research: a practical guide for beginners. Sage
Brusdal R, Frønes I (2014) Well-being and children in a consumer society. In: Handbook of child well-being: theories, methods and policies in global perspective, pp 1427–1443. https://doi.org/10.1007/978-90-481-9063-8_58
Buhmann A, Fieseler C (2021) Towards a deliberative framework for responsible innovation in artificial intelligence. Technol Soc 64:101475
Bureau of Labor Statistics United States Department of Labor (USA) (2020) The employment situation—February 2020. Bureau of Labor Statistics, United States Department of Labor. https://www.bls.gov/news.release/pdf/empsit.pdf. Accessed 26 Dec 2021
Burks G, Clancy KB, Hunter CD, Amos JR (2019) Impact of ethics and social awareness curriculum on the engineering identity formation of high school girls. Educ Sci 9(4):250
Canadian Index of Wellbeing Organization (2019) What is Wellbeing? Canadian Index of Wellbeing. https://uwaterloo.ca/canadian-index-wellbeing/what-wellbeing. Accessed 26 Dec 2021
Canadian Institute for Advanced Research (CIFAR) (2018) AI & Society. Canadian Institute for Advanced Research. https://www.cifar.ca/ai/ai-society. Accessed 26 Dec 2021
Canney NE, Bielefeldt AR (2015) Differences in engineering students’ views of social responsibility between disciplines. J Profession Issues Eng Educ Pract. https://doi.org/10.1061/(ASCE)EI.1943-5541.0000248
Chiu TK, Meng H, Chai C-S, King I, Wong S, Yam Y (2021) Creation and evaluation of a pretertiary artificial intelligence (AI) curriculum. https://arxiv.org/abs/2101.07570. Accessed 26 Dec 2021
Clarke V, Braun V (2014) Thematic analysis. In: Teo T (ed) Encyclopedia of critical psychology. Springer, pp 1947–1952
Coeckelbergh M (2019) Artificial intelligence: some ethical issues and regulatory challenges. Technol Regul 2019:31–34. https://doi.org/10.26116/techreg.2019.003
Collett C, Dillon S (2019) AI and gender: four proposals for future research. The Leverhulme Centre for the Future of Intelligence, University of Cambridge. http://lcfi.ac.uk/media/uploads/files/AI_and_Gender___4_Proposals_for_Future_Research_210619_p8qAu8L.pdf. Accessed 26 Dec 2021
Colombo M (2014) Caring, the emotions, and social norm compliance. J Neurosci Psychol Econ 7(1):33–47. https://doi.org/10.1037/npe0000015
Cormier D, Jandrić P, Childs M, Hall R, White D, Phipps L, Truelove I, Hayes S, Fawns T (2019) Ten years of the postdigital in the 52group: reflections and developments 2009–2019. Postdigital Science and Education 1:475–506
Crow SM, Payne D (1992) Affirmative action for a face only a mother could love? J Bus Ethics 11(11):869–875. https://doi.org/10.1007/BF00872366
Diep L, Wolbring G (2013) Who needs to fit in? Who Gets to stand out? Communication Technologies Including Brain-Machine Interfaces Revealed from the Perspectives of Special Education School Teachers Through an Ableism Lens. Edu Sci 3(1):30–49
Diep L, Wolbring G (2015) Perceptions of Brain-Machine Interface Technology among Mothers of Disabled Children. Disabil Stud Q. https://doi.org/10.18061/.v35i4.3856
Elgin SC, Hays S, Mingo V, Shaffer CD, Williams J (2021) Building back more equitable STEM education: teach science by engaging students in doing science. bioRxiv. Accessed 26 Dec 2021
European Group on Ethics in Science and New Technologies (2018) Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems. European Commission. https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf. Accessed 26 Dec 2021
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) An ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. https://doi.org/10.31235/osf.io/2hfsc
Fortune N, Badland H, Clifton S, Emerson E, Rachele J, Stancliffe RJ, Zhou Q, Llewellyn G (2020) The Disability and Wellbeing Monitoring Framework: data, data gaps, and policy implications. Aust N Z J Public Health 44(3):227–232
Fratczak P, Goh Y M, Kinnell P, Soltoggio A, Justham L (2019) Understanding human behaviour in industrial human-robot interaction by means of virtual reality. ACM International Conference Proceeding Series. November 2019 Article No.: 19, pp 1–7. https://doi.org/10.1145/3363384.3363403
Furey H, Martin F (2019) AI education matters: a modular approach to AI ethics education. AI Matters 4(4):13–15
Garcia P, Scott K (2016) Traversing a political pipeline: An intersectional and social constructionist approach toward technology education for girls of color. stelar.edc.org. http://stelar.edc.org/sites/stelar.edc.org/files/Garcia%20%26%20Scott%202016.pdf. Accessed 26 Dec 2021
Garibay JC (2015) STEM students’ social agency and views on working for social change: Are STEM disciplines developing socially and civically responsible students? JRScT 52(5):610–632
Garrett N, Beard N, Fiesler C (2020) More than "If Time Allows" the role of ethics in AI education. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society pp 272–278. https://doi.org/10.1145/3375627.3375868
Gehl L, Ross H (2013) Disenfranchised spirit: a theory and a model. Pimatisiwin 11(1):31–42. https://ezproxy.lib.ucalgary.ca/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=fph&AN=91533986&site=ehost-live
Gherheș V, Obrad C (2018) Technical and humanities students’ perspectives on the development and sustainability of artificial intelligence (AI). Sustainability 10(9):3066
Greenbie BB (1969) Reports and comments. Land Econ 45(3):359
Guba EG (1981) Criteria for assessing the trustworthiness of naturalistic inquiries. Educ Tech Res Dev 29(2):75–91
Hager GD, Drobnis A, Fang F, Ghani R, Greenwald A, Lyons T, Parkes DC, Schultz J, Saria S, Smith SF (2019) Artificial intelligence for social good. arXiv.org. https://arxiv.org/ftp/arxiv/papers/1901/1901.05406.pdf. Accessed 26 Dec 2021
Hansen KB (2015) Exploring compatibility between “Subjective Well-Being” and “Sustainable Living” in Scandinavia. Soc Indic Res 122(1):175–187. https://doi.org/10.1007/s11205-014-0684-9
Heintz F (2021) Three interviews about K-12 AI education in America, Europe and Singapore. Künstl Intell 35(2):233–237. https://doi.org/10.1007/s13218-021-00730-w
Holmström IK, Kaminsky E, Höglund AT, Carlsson M (2017) Nursing students’ awareness of inequity in healthcare—an intersectional perspective. Nurse Educ Today 48:134–139. https://doi.org/10.1016/j.nedt.2016.10.009
Hsieh H-F, Shannon SE (2005) Three approaches to qualitative content analysis. Qual Health Res 15(9):1277–1288
International Disability Alliance (IDA) (2017) Joint statement by the disability sector: disability data disaggregation. International Disability Alliance (IDA). https://www.internationaldisabilityalliance.org/data-joint-statement-march2017. Accessed 26 Dec 2021
Jeffrey T (2020) Understanding College Student Perceptions of Artificial Intelligence. International Institute of Informatics and Cybernetics. http://www.iiisci.org/journal/PDV/sci/pdfs/HB785NN20.pdf. Accessed 26 Dec 2021
Johnson K (2013) The UN convention on the rights of persons with disabilities: a framework for ethical and inclusive practice? Ethics Soc Welf 7(3):218–231. https://doi.org/10.1080/17496535.2013.815791
Josa I, Aguado A (2021) Social sciences and humanities in the education of civil engineers: Current status and proposal of guidelines. J Clean Prod 311:127489. https://doi.org/10.1016/j.jclepro.2021.127489
Kakoullis E, Johnson K (2020) Conclusion: recognising human rights in different cultural contexts. In: Kakoullis E, Johnson K (eds) Recognising human rights in different cultural contexts. Palgrave Macmillan, pp 377–385. https://doi.org/10.1007/978-981-15-0786-1_17
Kelley TR, Knowles JG (2016) A conceptual framework for integrated STEM education. Int J STEM Educ 3(1):1–11
Khosla R, Chu MT (2013) Embodying care in matilda: An affective communication robot for emotional wellbeing of older people in Australian residential care facilities. ACM Trans Manag Inf Syst 4(4):18. https://doi.org/10.1145/2544104
Korkmaz Ö, Çakir R, Erdoğmuş FU (2021) Secondary school students’ basic STEM skill levels according to their self-perceptions: a scale adaptation. Particip Educ Res 8(1):423–437
Kutsar D, Soo K, Strózik T, Strózik D, Grigoraș B, Bălțătescu S (2019) Does the realisation of children’s rights determine good life in 8-year-olds’ perspectives? A comparison of Eight European Countries. Child Indic Res 12(1):161–183. https://doi.org/10.1007/s12187-017-9499-y
Lillywhite A, Wolbring G (2019) Coverage of ethics within the artificial intelligence and machine learning academic literature: The case of disabled people. Assist Technol. https://doi.org/10.1080/10400435.2019.1593259. (Latest Articles 1-7)
Lillywhite A, Wolbring G (2020) Coverage of artificial intelligence and machine learning within academic literature, canadian newspapers, and twitter tweets: the case of disabled people. Societies 10(1):1–27. https://doi.org/10.3390/soc10010023
Long D, Magerko B (2020) What is AI literacy? Competencies and design considerations. In: Proceedings of the 2020 CHI Conference on Human factors in computing systems. pp 1–16. https://doi.org/10.1145/3313831.3376727
Malti T, Peplak J, Zhang L (2020) The development of respect in children and adolescents. Monogr Soc Res Child Dev 85(3):7–99. https://doi.org/10.1111/mono.12417
National Academies of Sciences (2018) The frontiers of machine learning: 2017 Raymond and Beverly Sackler US-UK Scientific Forum, vol 16. National Academies of Sciences, Engineering, Medicine. https://doi.org/10.17226/25021
Neuwelt-Kearns C, Nicholls A, Deane KL, Robinson H, Lowe D, Pope R, Goddard T, van der Schaaf M, Bartley A (2021) The realities and aspirations of people experiencing food insecurity in Tāmaki Makaurau. Kotuitui. https://doi.org/10.1080/1177083X.2021.1951779
Nierling L, João-Maia M, Hennen L, Bratan T, Kuuk P, Cas J, Capari L, Krieger-Lamina J, Mordini E, Wolbring G (2018) Assistive technologies for people with disabilities Part III: Perspectives on assistive technologies. European Parliament. http://www.europarl.europa.eu/RegData/etudes/IDAN/2018/603218/EPRS_IDA(2018)603218(ANN3)_EN.pdf. Accessed 26 Dec 2021
OECD (2020) OECD Better Life Index. http://www.oecdbetterlifeindex.org/#/11111111111. Accessed 26 Dec 2021
Pham Q, Gamble A, Hearn J, Cafazzo JA (2021) The need for ethnoracial equity in artificial intelligence for diabetes management: review and recommendations [Review]. J Med Internet Res 23(2):e22320. https://doi.org/10.2196/22320
Press F (1982) Science and Technology in the 1980s. Trans R Soc Canada Ottawa 20:105–116
Raji ID, Scheuerman MK, Amironesei R (2021) You can't sit with us: exclusionary pedagogy in AI ethics education. In: Proceedings of the 2021 ACM Conference on fairness accountability and transparency, pp 515–525
Ramirez Velazquez M (2021) Not Just Teaching How: Supporting a Culture Shift in STEM Education. Bryn Mawr College. https://scholarship.tricolib.brynmawr.edu/handle/10066/23046. Accessed 26 Dec 2021
Raphael D, Bryant T, Mikkonen J, Raphael A (2020) Social Determinants of Health: The Canadian Facts https://thecanadianfacts.org/. Accessed 26 Dec 2021
Reddy R (2006) Robotics and intelligent systems in support of society [Review]. IEEE Intell Syst 21(3):24–31. https://doi.org/10.1109/MIS.2006.57. (Article 1637347)
Reuters (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G. Accessed 26 Dec 2021
Rodriguez-Nikl T (2021) Technology uncertainty and the good life: a stoic perspective. In: Pirtle Z, Tomblin D, Madhavan G (eds) Engineering and Philosophy. Philosophy of Engineering and Technology, vol 37. Springer, Cham. https://doi.org/10.1007/978-3-030-70099-7_11
Sandelowski M (2000) Combining qualitative and quantitative sampling, data collection, and analysis techniques in mixed-method studies. Res Nurs Health 23(3):246–255
Schiff DS, Logevall E, Borenstein J, Newstetter W, Potts C, Zegura E (2021) Linking personal and professional social responsibility development to microethics and macroethics: observations from early undergraduate education. J Eng Educ 110(1):70–91
Steckermeier LC, Delhey J (2019) Better for everyone? Egalitarian culture and social wellbeing in Europe. Soc Indic Res 143(3):1075–1108. https://doi.org/10.1007/s11205-018-2007-z
Steinbauer G, Kandlhofer M, Chklovski T, Heintz F, Koenig S (2021) A differentiated discussion about AI education K-12. KI-Künstliche Intell 35:1–7
Strachan G (2010) Still working for the man? Women’s employment experiences in Australia since 1950. Aust J Soc Issues 45(1):117–130. https://doi.org/10.1002/j.1839-4655.2010.tb00167.x
Straw I (2020) The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future. Artif Intell Med 110:101965. https://doi.org/10.1016/j.artmed.2020.101965
Tat E, Bhatt DL, Rabbat MG (2020) Addressing bias: artificial intelligence in cardiovascular medicine [Note]. Lancet Digit Health 2(12):e635–e636. https://doi.org/10.1016/S2589-7500(20)30249-1
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019) Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf. Accessed 26 Dec 2021
Tomblin D, Mogul N (2020) STS Postures: responsible innovation and research in undergraduate STEM education. J Responsib Innov 7(sup1):117–127
Touretzky D, Gardner-McCune C, Breazeal C, Martin F, Seehorn D (2019) A year in K-12 AI education. AI Mag 40(4):88–90
Ullrich D, Diefenbach S, Butz A (2016) Murphy Miserable Robot—a companion to support children’s wellbeing in emotionally difficult situations. In: Conference on Human Factors in Computing Systems—Proceedings
Vallor S (2016) Introduction: envisioning the good life In the 21st century and beyond. Santa Clara University. https://scholarcommons.scu.edu/cgi/viewcontent.cgi?article=1060&context=phi. Accessed 26 Dec 2021
Vesnic-Alujevic L, Nascimento S, Polvora A (2020) Societal and ethical impacts of artificial intelligence: critical notes on European policy frameworks. Telecommun Pol 44(6):101961. https://doi.org/10.1016/j.telpol.2020.101961
Vigdor L (2011) A techno-passion that is not one: Rethinking marginality, exclusion, and difference. Int J Gend Sci Technol 3(1):4–37
Walsh T (2019) Australia’s AI future. RSNSW 152(Part 1):101–104
Washington Group on Disability Statistics (2020) Disability Measurement and Monitoring using the Washington Group Disability Questions. Washington Group on Disability Statistics. https://www.washingtongroup-disability.com/fileadmin/uploads/wg/Documents/WG_Resource_Document__4_-_Monitoring_Using_the_WG_Questions.pdf. Accessed 26 Dec 2021
Weissglass DE (2021) Contextual bias, the democratization of healthcare, and medical artificial intelligence in low- and middle-income countries. Bioethics. https://doi.org/10.1111/bioe.12927
West DM (2018) The future of work: Robots, AI, and automation. https://www.scopus.com/inward/record.uri?eid=2-s2.0-85055018914&partnerID=40&md5=86237b8da3f84fe6de1db9c2905619b2
Whittlestone J, Nyrup R, Alexandrova A, Dihal K, Cave S (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf. Accessed 26 Dec 2021
Wolbring G (2020) Ability expectation and Ableism glossary. Wordpress. https://wolbring.wordpress.com/ability-expectationableism-glossary/. Accessed 26 Dec 2021
Wolbring G (2021) Auditing the impact of neuro-advancements on health equity. J Neurol Res. https://doi.org/10.14740/jnr695
Wolbring G, Diep L (2016) Cognitive/neuroenhancement through an ability studies lens. In: Jotterand F, Dubljevic V (eds) Cognitive enhancement. Oxford University Press, pp 57–75
Wolbring G, Ghai A (2015) Interrogating the impact of scientific and technological development on disabled children in India and beyond. Disabil Glob South 2(2):667–685
Wolbring G, Lillywhite A (2021) Equity/equality, diversity, and inclusion (EDI) in universities: the case of disabled people. Societies 11(2):49. https://doi.org/10.3390/soc11020049
World Health Organization (2011) About the community-based rehabilitation (CBR) matrix. World Health Organization. http://www.who.int/disabilities/cbr/matrix/en/. Accessed 26 Dec 2021
World Health Organization (2020) Social determinants of health. World Health Organization. https://www.who.int/social_determinants/en/. Accessed 26 Dec 2021
Xu X, Zhao Y, Xia S, Cui P, Tang W, Hu X, Wu B (2020) Quality of life and its influencing factors among centenarians in Nanjing, China: a cross-sectional study. Soc Indic Res. https://doi.org/10.1007/s11205-020-02399-4
Yumakulov S, Yergens D, Wolbring G (2012) Imagery of disabled people within social robotics research. In: Ge S, Khatib O, Cabibihan J-J, Simmons R, Williams M-A (eds) Social robotics, vol 7621. Springer, pp 168–177. https://doi.org/10.1007/978-3-642-34103-8_17
Yuste R, Goering S, Bi G, Carmena JM, Carter A, Fins JJ, Friesen P, Gallant J, Huggins JE, Illes J (2017) Four ethical priorities for neurotechnologies and AI. Nat News 551(7679):159
Acknowledgements
We would like to thank the students that gave their precious time to take part in our study.
Ethics declarations
Conflict of interest
On behalf of all the authors, the corresponding author states that there is no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Lillywhite, B., Wolbring, G. Auditing the impact of artificial intelligence on the ability to have a good life: using well-being measures as a tool to investigate the views of undergraduate STEM students. AI & Soc (2023). https://doi.org/10.1007/s00146-022-01618-5