Philosophy of Artificial Intelligence

Edited by Eric Dietrich (State University of New York at Binghamton)
Assistant editor: Michelle Thomas (University of Western Ontario)
About this topic
Summary

The philosophy of artificial intelligence is a collection of issues primarily concerned with whether AI is possible -- with whether it is possible to build an intelligent, thinking machine. Also of concern is whether humans and other animals are best thought of as machines (computational robots, say) themselves. The most important of the "whether-possible" problems lie at the intersection of theories of the semantic contents of thought and the nature of computation. A second suite of problems surrounds the nature of rationality. A third suite revolves around the seemingly "transcendent" reasoning powers of the human mind; these problems derive from Kurt Gödel's famous Incompleteness Theorem. A fourth collection of problems concerns the architecture of an intelligent machine: should a thinking computer use discrete or continuous modes of computing and representing, is having a body necessary, and is being conscious necessary? This takes us to the final set of questions. Can a computer be conscious? Can a computer have a moral sense? Would we have duties to thinking computers, to robots? For example, is it moral for humans to even attempt to build an intelligent machine? If we did build such a machine, would turning it off be the equivalent of murder? And if we had a race of such machines, would it be immoral to force them to work for us?

Key works Probably the most important attack on the possibility of AI is John Searle's famous Chinese Room Argument: Searle 1980. This attack focuses on the semantic aspects (mental semantics) of thoughts, thinking, and computing. For some replies to this argument, see the same 1980 journal issue as Searle's original paper. For the problem of the nature of rationality, see Pylyshyn 1987. An especially strong attack on AI from this angle is Jerry Fodor's work on the frame problem: Fodor 1987. On the frame problem in general, see McCarthy & Hayes 1969. For some replies to Fodor and advances on the frame problem, see Ford & Pylyshyn 1996. For the transcendent-reasoning issue, a central and important paper is Putnam 1960; this paper is arguably the source of the computational turn in 1960s-70s philosophy of mind. For architecture-of-mind issues, see, for starters, M. Spivey's The Continuity of Mind (Oxford), which argues against the notion of discrete representations; see also Gelder & Port 1995. For an argument for discrete representations, see Dietrich & Markman 2003. For an argument that the mind's boundaries do not end at the body's boundaries, see Clark & Chalmers 1998. For a statement of and argument for computationalism -- the thesis that the mind is a kind of computer -- see Shimon Edelman's excellent book Edelman 2008. See also Chapter 9 of Chalmers 1996.
Introductions Chinese Room Argument: Searle 1980. Frame problem: Fodor 1987. Computationalism and Gödelian-style refutations: Putnam 1960. Architecture: M. Spivey's The Continuity of Mind (Oxford) and Shimon Edelman's Edelman 2008. Ethical issues: Anderson & Anderson 2011 and Müller 2020. Conscious computers: Chalmers 2011.

Contents
Material to categorize
  1. Willingness of sharing facial data for emotion recognition: a case study in the insurance market.Giulio Mangano, Andrea Ferrari, Carlo Rafele, Enrico Vezzetti & Federica Marcolin - forthcoming - AI and Society:1-12.
    The research on technologies and methodologies for (accurate, real-time, spontaneous, three-dimensional…) facial expression recognition is ongoing and has been fostered in the past decades by advances in classification algorithms like deep learning, which makes them part of the Artificial Intelligence literature. Still, despite its upcoming application to contexts such as human–computer interaction, product and service design, and marketing, only a few literature studies have investigated the willingness of end users to share their facial data with the purpose of detecting emotions. (...)
  2. Developing Artificial Human-Like Arithmetical Intelligence (and Why).Markus Pantsar - forthcoming - Minds and Machines:1-18.
    Why would we want to develop artificial human-like arithmetical intelligence, when computers already outperform humans in arithmetical calculations? Aside from arithmetic consisting of much more than mere calculations, one suggested reason is that AI research can help us explain the development of human arithmetical cognition. Here I argue that this question needs to be studied already in the context of basic, non-symbolic, numerical cognition. Analyzing recent machine learning research on artificial neural networks, I show how AI studies could potentially shed (...)
  3. Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it.Alice Liefgreen, Netta Weinstein, Sandra Wachter & Brent Mittelstadt - forthcoming - AI and Society:1-17.
    Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values (...)
  4. The system of autono‑mobility: computer vision and urban complexity—reflections on artificial intelligence at urban scale.Fabio Iapaolo - 2023 - AI and Society 38 (3):1111-1122.
    Focused on city-scale automation, and using self-driving cars (SDCs) as a case study, this article reflects on the role of AI—and in particular, computer vision systems used for mapping and navigation—as a catalyst for urban transformation. Urban research commonly presents AI and cities as having a one-way cause-and-effect relationship, giving undue weight to AI’s impact on cities and overlooking the role of cities in shaping AI. Working at the intersection of data science and social research, this paper aims to counter (...)
  5. Human–machine coordination in mixed traffic as a problem of Meaningful Human Control.Giulio Mecacci, Simeon C. Calvert & Filippo Santoni de Sio - 2023 - AI and Society 38 (3):1151-1166.
    The urban traffic environment is characterized by the presence of a highly differentiated pool of users, including vulnerable ones. This makes vehicle automation particularly difficult to implement, as a safe coordination among those users is hard to achieve in such an open scenario. Different strategies have been proposed to address these coordination issues, but all of them have been found to be costly for they negatively affect a range of human values (e.g. safety, democracy, accountability…). In this paper, we claim (...)
  6. Urban-semantic computer vision: a framework for contextual understanding of people in urban spaces.Anthony Vanky & Ri Le - 2023 - AI and Society 38 (3):1193-1207.
    Increasing computational power and improving deep learning methods have made computer vision technologies pervasively common in urban environments. Their applications in policing, traffic management, and documenting public spaces are increasingly common (Ridgeway 2018, Coifman et al. 1998, Sun et al. 2020). Despite the often-discussed biases in the algorithms' training and unequally borne benefits (Khosla et al. 2012), almost all applications similarly reduce urban experiences to simplistic, reductive, and mechanistic measures. There is a lack of context, depth, and specificity in these (...)
  7. Advancing residents’ use of shared spaces in Nordic superblocks with intelligent technologies.Jouko Makkonen, Rita Latikka, Laura Kaukonen, Markus Laine & Kaisa Väänänen - 2023 - AI and Society 38 (3):1167-1184.
    To support the sustainability of future cities, residents’ living spaces need to be built and used efficiently, while supporting residents’ communal wellbeing. Nordic superblock is a new planning, housing, and living concept in which residents of a neighborhood—a combination of city blocks—share yards, common spaces and utilities. Sharing living spaces is an essential element of this approach. In this study, our goal was to study the ways in which intelligent technology solutions—such as proactive, data-driven Artificial Intelligence (AI) applications—could support and (...)
  8. Street surface condition of wealthy and poor neighborhoods: the case of Los Angeles.Pooyan Doozandeh, Limeng Cui & Rui Yu - 2023 - AI and Society 38 (3):1185-1192.
    Are wealthy neighborhoods visually more attractive than poorer neighborhoods? Past studies provided a positive answer to this question for characteristics such as green space and visible pollution. The condition of streets is one of the characteristics that can not only contribute to neighborhoods’ aesthetics, but can also affect residents’ health and mobility. In this study, we investigate whether street condition of wealthy neighborhoods is different from poorer neighborhoods. We resolved the difficulty of data collection using a dataset that utilized artificial (...)
  9. Understanding citizen perceptions of AI in the smart city.Anu Lehtiö, Maria Hartikainen, Saara Ala-Luopa, Thomas Olsson & Kaisa Väänänen - 2023 - AI and Society 38 (3):1123-1134.
    Artificial intelligence (AI) is embedded in a wide variety of Smart City applications and infrastructures, often without the citizens being aware of the nature of their “intelligence”. AI can affect citizens’ lives concretely, and thus, there may be uncertainty, concerns, or even fears related to AI. To build acceptable futures of Smart Cities with AI-enabled functionalities, the Human-Centered AI (HCAI) approach offers a relevant framework for understanding citizen perceptions. However, only a few studies have focused on clarifying the citizen perceptions (...)
  10. Katherine Crawford: Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.Aale Luusua - 2023 - AI and Society 38 (3):1257-1259.
  11. Urban AI: understanding the emerging role of artificial intelligence in smart cities.Aale Luusua, Johanna Ylipulli, Marcus Foth & Alessandro Aurigi - 2023 - AI and Society 38 (3):1039-1044.
  12. The Polyopticon: a diagram for urban artificial intelligences.Stephanie Sherman - 2023 - AI and Society 38 (3):1209-1222.
    Smart city discourses often invoke the Panopticon, a disciplinary architecture designed by Jeremy Bentham and popularly theorized by Michel Foucault, as a model for understanding the social impact of AI technologies. This framing focuses attention almost exclusively on the negative ramifications of Urban AI, correlating ubiquitous surveillance, centralization, and data consolidation with AI development, and positioning technologies themselves as the driving factor shaping privacy, sociality, equity, access, and autonomy in the city. This paper describes an alternative diagram for Urban AI—the (...)
  13. Cyclists and autonomous vehicles at odds.Alexander Gaio & Federico Cugurullo - 2023 - AI and Society 38 (3):1223-1237.
    Consequential historical decisions that shaped transportation systems and their influence on society have many valuable lessons. The decisions we learn from and choose to make going forward will play a key role in shaping the mobility landscape of the future. This is especially pertinent as artificial intelligence (AI) becomes more prevalent in the form of autonomous vehicles (AVs). Throughout urban history, there have been cyclical transport oppressions of previous-generation transportation methods to make way for novel transport methods. These cyclical oppressions (...)
  14. The emergence and evolution of urban AI.Michael Batty - 2023 - AI and Society 38 (3):1045-1048.
  15. Contestations in urban mobility: rights, risks, and responsibilities for Urban AI.Nitin Sawhney - 2023 - AI and Society 38 (3):1083-1098.
    Cities today are dynamic urban ecosystems with evolving physical, socio-cultural, and technological infrastructures. Many contestations arise from the effects of inequitable access and intersecting crises currently faced by cities, which may be amplified by the algorithmic and data-centric infrastructures being introduced in urban contexts. In this article, I argue for a critical lens into how inter-related urban technologies, big data and policies, constituted as Urban AI, offer both challenges and opportunities. I examine scenarios of contestations in _urban mobility_, defined broadly (...)
  16. Everyday data cultures: beyond Big Critique and the technological sublime.Jean Burgess - 2023 - AI and Society 38 (3):1243-1244.
  17. Urban AI depends: the need for (wider) urban strategies.Alessandro Aurigi - 2023 - AI and Society 38 (3):1245-1247.
  18. Federico Cugurullo (2021): Frankenstein Urbanism: Eco, Smart and Autonomous Cities, Artificial Intelligence and the End of the City.Johanna Ylipulli - 2023 - AI and Society 38 (3):1253-1255.
  19. Assemblage thinking as a methodology for studying urban AI phenomena.Yu-Shan Tseng - 2023 - AI and Society 38 (3):1099-1110.
    This paper seeks to bypass assumptions that researchers in critical algorithmic studies and urban studies find it difficult to study algorithmic systems due to their black-boxed nature. In addition, it seeks to work against the assumption that advocating for transparency in algorithms is, therefore, the key for achieving an enhanced understanding of the role of algorithmic technologies on modern life. Drawing on applied assemblage thinking via the concept of the urban assemblage, I demonstrate how the notion of urban assemblage can (...)
  20. Watch out! Cities as data engines.Fabio Duarte & Barbro Fröding - 2023 - AI and Society 38 (3):1249-1250.
  21. Time to re-humanize algorithmic systems.Minna Ruckenstein - 2023 - AI and Society 38 (3):1241-1242.
  22. We have to talk about emotional AI and crime.Lena Podoletz - 2023 - AI and Society 38 (3):1067-1082.
    Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states of people using data sources, such as facial (micro)-movements, body language, vocal tone or the choice of words. The performance of such systems is heavily debated and so are the underlying scientific methods that serve as the basis for many such technologies. In this article I will engage with this new technology, and with the debates and literature that surround it. Working at the intersection of (...)
  23. Artificial intelligence in local governments: perceptions of city managers on prospects, constraints and choices.Tan Yigitcanlar, Duzgun Agdas & Kenan Degirmenci - 2023 - AI and Society 38 (3):1135-1150.
    Highly sophisticated capabilities of artificial intelligence (AI) have skyrocketed its popularity across many industry sectors globally. The public sector is one of these. Many cities around the world are trying to position themselves as leaders of urban innovation through the development and deployment of AI systems. Likewise, increasing numbers of local government agencies are attempting to utilise AI technologies in their operations to deliver policy and generate efficiencies in highly uncertain and complex urban environments. While the popularity of AI is (...)
  24. Will AI end privacy? How do we avoid an Orwellian future.Toby Walsh - 2023 - AI and Society 38 (3):1239-1240.
  25. Tensions in transparent urban AI: designing a smart electric vehicle charge point.Kars Alfrink, Ianus Keller, Neelke Doorn & Gerd Kortuem - 2023 - AI and Society 38 (3):1049-1065.
    The increasing use of artificial intelligence (AI) by public actors has led to a push for more transparency. Previous research has conceptualized AI transparency as knowledge that empowers citizens and experts to make informed choices about the use and governance of AI. Conversely, in this paper, we critically examine if transparency-as-knowledge is an appropriate concept for a public realm where private interests intersect with democratic concerns. We conduct a practice-based design research study in which we prototype and evaluate a transparent (...)
  26. All knowledge is not smart: racial and environmental injustices within legacies of smart cities.Hira Sheikh - 2023 - AI and Society 38 (3):1251-1252.
  27. Art, technology and the Internet of Living Things.Vibeke Sørensen & J. Stephen Lansing - forthcoming - AI and Society:1-17.
    Intelligence augmentation was one of the original goals of computing. Artificial Intelligence (AI) inherits this project and is at the leading edge of computing today. Computing can be considered an extension of brain and body, with mathematical prowess and logic fundamental to the infrastructure of computing. Multimedia computing—sensing, analyzing, and translating data to and from visual images, animation, sound and music, touch and haptics, as well as smell—is based on our human senses and is now commonplace. We use data visualization (...)
  28. Rethinking “digital”: a genealogical enquiry into the meaning of digital and its impact on individuals and society.Luca Capone, Marta Rocchi & Marta Bertolaso - forthcoming - AI and Society:1-11.
    In the current social and technological scenario, the term digital is abundantly used with an apparently transparent and unambiguous meaning. This article aims to unveil the complexity of this concept, retracing its historical and cultural origin. This genealogical overview allows to understand the reason why an instrumental conception of digital media has prevailed, considering the digital as a mere tool to convey a message, as opposed to a constitutive conception. The constitutive conception places the digital phenomenon in the broader ground (...)
  29. Explainable AI and Causal Understanding: Counterfactual Approaches Considered.Sam Baron - forthcoming - Minds and Machines.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (2000) and Woodward (...)
  30. On Pearl's Hierarchy and the Foundations of Causal Inference.Elias Bareinboim, Juan Correa, Duligur Ibeling & Thomas Icard - 2022 - In Hector Geffner, Rina Dechter & Joseph Y. Halpern (eds.), Probabilistic and Causal Inference: the Works of Judea Pearl. ACM Books. pp. 507-556.
    Cause and effect relationships play a central role in how we perceive and make sense of the world around us, how we act upon it, and ultimately, how we understand ourselves. Almost two decades ago, computer scientist Judea Pearl made a breakthrough in understanding causality by discovering and systematically studying the “Ladder of Causation” [Pearl and Mackenzie 2018], a framework that highlights the distinct roles of seeing, doing, and imagining. In honor of this landmark discovery, we name this the Pearl (...)
  31. Trust, understanding, and machine translation: the task of translation and the responsibility of the translator.Melvin Chen - forthcoming - AI and Society:1-13.
    Could translation be fully automated? We must first acknowledge the complexity, ambiguity, and diversity of natural languages. These aspects of natural languages, when combined with a particular dilemma known as the computational dilemma, appear to imply that the machine translator faces certain obstacles that a human translator has already managed to overcome. At the same time, science has not yet solved the problem of how human brains process natural languages and how human beings come to acquire natural language understanding. We (...)
  32. Artificial intelligence as the new fire and its geopolitics.Manh-Tung Ho & Hong-Kong T. Nguyen - forthcoming - AI and Society:1-2.
  33. Adopting AI: how familiarity breeds both trust and contempt.Michael C. Horowitz, Lauren Kahn, Julia Macdonald & Jacquelyn Schneider - forthcoming - AI and Society:1-15.
    Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice, it is human behavior, not technology in a vacuum, that dictates how technology seeps into—and changes—societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we look at representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse (...)
  34. Correction to: Robots as moral environments.Tomislav Furlanis, Takayuki Kanda & Dražen Brščić - forthcoming - AI and Society:1-1.
  35. The Indian approach to Artificial Intelligence: an analysis of policy discussions, constitutional values, and regulation.P. R. Biju & O. Gayathri - forthcoming - AI and Society:1-15.
    India has produced several drafts of data policies. In this work, they are referred to [1] JBNSCR 2018, [2] DPDPR 2018, [3] NSAI 2018, [4] RAITF 2018, [5] PDPB 2019, [6] PRAI 2021, [7] JPCR 2021, [8] IDAUP 2022, [9] IDABNUP 2022. All of them consider Artificial Intelligence (AI) a social problem solver at the societal level, let alone an incentive for economic growth. However, these policy drafts warn of the social disruptions caused by algorithms and encourage the careful use (...)
  36. Ethics of using artificial intelligence (AI) in veterinary medicine.Simon Coghlan & Thomas Quinn - 2023 - AI and Society:1-12.
    This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals and differences in opinion about responsibilities to animal patients and human clients. The paper examines how these distinctive (...)
  37. Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities.Sinead O’Connor & Helen Liu - forthcoming - AI and Society:1-13.
    Across the world, artificial intelligence (AI) technologies are being more widely employed in public sector decision-making and processes as a supposedly neutral and an efficient method for optimizing delivery of services. However, the deployment of these technologies has also prompted investigation into the potentially unanticipated consequences of their introduction, to both positive and negative ends. This paper chooses to focus specifically on the relationship between gender bias and AI, exploring claims of the neutrality of such technologies and how its understanding (...)
  38. Norms and Causation in Artificial Morality.Laura Fearnley - forthcoming - Joint Proceedings of the Acm Iui:1-4.
    There has been an increasing interest into how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correction. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer-up such (...)
  39. No such thing as one-size-fits-all in AI ethics frameworks: a comparative case study.Vivian Qiang, Jimin Rhim & AJung Moon - forthcoming - AI and Society:1-20.
    Despite the bombardment of AI ethics frameworks (AIEFs) published in the last decade, it is unclear which of the many have been adopted in the industry. What is more, the sheer volume of AIEFs without a clear demonstration of their effectiveness makes it difficult for businesses to select which framework they should adopt. As a first step toward addressing this problem, we employed four different existing frameworks to assess AI ethics concerns of a real-world AI system. We compared the experience (...)
  40. Pashmina authentication on imagery data using deep learning.Muzafar Rasool Bhat, Assif Assad, Ab Naffi Ahanger, Shabana Nargis Rasool & Abdul Basit Ahanger - forthcoming - AI and Society:1-9.
    Pashmina is one of the most luxurious and finest fibres in the world. It is a special kind of wool obtained from Cashmere goats. Counterfeiting Pashmina is becoming a prevalent malpractice because of limited supply, expensive pricing and high demand in western markets. Presently, there is a lack of a low-cost and easily available approach for distinguishing authentic Pashmina apparels from other similar-looking products. Because of technological advances and cost reductions in digital image processing, we have been able to implement (...)
  41. Can AI systems become wise? A note on artificial wisdom.Ana Sinha & Pooja Lakhanpal - forthcoming - AI and Society:1-2.
  42. Ethical AI does not have to be like finding a black cat in a dark room.Apala Lahiri Chavan & Eric Schaffer - forthcoming - AI and Society:1-3.
  43. Editorial: Beyond regulatory ethics.Satinder P. Gill - forthcoming - AI and Society:1-2.
  44. The future of ethics in AI: challenges and opportunities.Angelo Trotta, Marta Ziosi & Vincenzo Lomonaco - 2023 - AI and Society 38 (2):439-441.
  45. Toward safe AI.Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling as (...)
  46. Evidence-based AI, ethics and the circular economy of knowledge.Caterina Berbenni-Rehm - 2023 - AI and Society 38 (2):889-895.
    Everything we do in life involves a connection with information, experience and know-how: together these represent the most valuable of intangible human assets encompassing our history, cultures and wisdom. However, the more easily new technologies gather information, the more we are confronted with our limited capacity to distinguish between what is essential, important or merely ‘nice-to-have’. This article presents the case study of a multilingual Knowledge Management System, the Business enabling e-Platform that gathers and protects tacit knowledge, as the key (...)
  47. A machine learning approach to recognize bias and discrimination in job advertisements.Richard Frissen, Kolawole John Adebayo & Rohan Nanda - 2023 - AI and Society 38 (2):1025-1038.
    In recent years, the work of organizations in the area of digitization has intensified significantly. This trend is also evident in the field of recruitment where job application tracking systems (ATS) have been developed to allow job advertisements to be published online. However, recent studies have shown that recruiting in most organizations is not inclusive, being subject to human biases and prejudices. Most discrimination activities appear early but subtly in the hiring process, for instance, exclusive phrasing in job advertisement discourages (...)
  48. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies.Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism (...)
  49. An explanation space to align user studies with the technical development of Explainable AI.Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental models, (...)
  50. Training philosopher engineers for better AI.Brian Ball & Alexandros Koliousis - 2023 - AI and Society 38 (2):861-868.
    There is a deluge of AI-assisted decision-making systems, where our data serve as proxy to our actions, suggested by AI. The closer we investigate our data (raw input, or their learned representations, or the suggested actions), we begin to discover “bugs”. Outside of their test, controlled environments, AI systems may encounter situations investigated primarily by those in other disciplines, but experts in those fields are typically excluded from the design process and are only invited to attest to the ethical features (...)