Abstract
During 2018, as part of a research project funded by the Deviant Practice Grant, artist Bruno Moreschi and digital media researcher Gabriel Pereira worked with the collection of the Van Abbemuseum (Eindhoven, NL), reading its artworks through commercial image-recognition (computer vision) artificial intelligences from leading tech companies. The main takeaways were: somewhat as expected, AI is constructed through a capitalist and product-focused reading of the world (values that are embedded in this sociotechnical system); and this process of using AI is an innovative way of doing institutional critique, as AI offers an untrained eye that reveals the inner workings of the art system through its glitches. This paper regards these glitches as potentially revealing of the art system, and even poetic at times. We also look at them as a way of exposing the inherent fallibility of the commercial use of AI and machine learning to catalogue the world: it cannot comprehend ways of knowing that lie outside the logic of the algorithm. At the same time, due to their “glitchy” capacity to level and reimagine, these faulty readings can also serve as a new way of reading art; a new way of thinking critically about the art image at a moment when visual culture has changed form into hybrids of human–machine cognition and “machine-to-machine seeing”.
Notes
The authorship of Fountain has been under dispute in recent years. Research by historian Irene Gammel indicates there is evidence that the piece was actually created by the dada artist Baroness Elsa, although Duchamp was the one who ultimately submitted it to the jury.
They were chosen because the first exhibition deals directly with the changes in the status of art in modernity, especially its reproducibility, and the second is dedicated almost exclusively to contemporary art, much of which dissociates what is seen from its signification.
It is worth noting that the subject matter of this research (artworks from a collection) is particularly suitable for this approach, since, unlike predictive policing and other egregious algorithmic systems, these errors do not directly cause harm.
To be clear, not all of the commercially available AIs we used are based on ImageNet, but the project acted as a catalyst for the field: by providing plenty of data about objects and their properties, and by creating multiple competitions around that data, ImageNet legitimated computer vision research and made it useful for industry. Even if an AI does not use ImageNet directly, it was almost certainly developed in connection to it.
And to support the military, but this arguably happens through other systems based on the commercially available ones; or through military grants, which also underlie the whole system.
This raises a question: why, then, are museums using these same AIs, for example in so many projects with Google Arts & Culture?
This happened with images of nude women, as in the painting Liggend Naakt (1931) by Jan Sluijters, and even of dressed women, as in Moeder en Kind (1922) by Gust de Smet and Boerderij (1919) by Heinrich Campendonk. The same has also happened with images of more abstract sculptures, perhaps because of their possibly phallic shapes, as in My neck, my back curve silently (1930) by Karin Arink.
Acknowledgements
Portions of this article appeared in a different version in the Van Abbemuseum’s “Deviant Practice Research Programme 2018-19” electronic publication (CC BY-NC-ND 4.0). We’d like to thank the special issue editors, reviewers, and others who have contributed to and supported this research project. Special thanks to Giselle Beiguelman, the staff of the Van Abbemuseum (especially Nick Aikens, Evelien Scheltinga and Christiane Berndes), and the Center for Arts, Design and Social Research.
Funding
This research has received funding from the Deviant Practice Research Programme at the Van Abbemuseum (Netherlands), and the Center for Arts, Design and Social Research.
Cite this article
Pereira, G., Moreschi, B. Artificial intelligence and institutional critique 2.0: unexpected ways of seeing with computer vision. AI & Soc 36, 1201–1223 (2021). https://doi.org/10.1007/s00146-020-01059-y