PROCEEDINGS
Art Machines: International Symposium on Computational Media Art
Proceedings
Editor: Richard William Allen
Co-Editors: Olli Tapio Leino, Malina Siu, Sureshika Piyasena
Cover Design: Trilingua
Copyright © 2018. All rights reserved by the individual authors and the School of
Creative Media, City University of Hong Kong.
No part of this publication may be reproduced, stored in a retrieval system,
transmitted in any form or by any means, without prior written permission of
the Individual Authors and Conference Director of Art Machines:
International Symposium on Computational Media Art.
Individual authors of papers and abstracts are solely responsible for all
materials submitted for the publication. The publisher and the editors do not
warrant or assume any legal responsibilities for the publication’s content. All
opinions expressed in the book are those of the authors and do not reflect those of
the publisher and the editors.
Published by: School of Creative Media, City University of Hong Kong
81 Tat Chee Avenue, Kowloon Tong, Hong Kong
Printed in Hong Kong
ISBN: 978-962-442-421-8
Art Machines: International Symposium on Computational Media Art Organizing Committee
Conference Director
Richard William Allen
School of Creative Media
City University of Hong Kong
Conference Co-Directors
Machine Learning and Art Plenaries and Panels
Hector Rodriguez
School of Creative Media
City University of Hong Kong
Tomas Laurenzo
School of Creative Media
City University of Hong Kong
Open Call Conference: Scholarly Abstracts
Damien Charrieras
School of Creative Media
City University of Hong Kong
Olli Tapio Leino
School of Creative Media
City University of Hong Kong
Open Call Conference: Artistic Project Abstracts
Harald Kraemer
School of Creative Media
City University of Hong Kong
Tobias Klein
School of Creative Media
City University of Hong Kong
“Algorithmic Art: Shuffling Space and Time” Exhibition Curator
Linda Lai
School of Creative Media
City University of Hong Kong
Conference Committee
Maurice Benayoun
School of Creative Media
City University of Hong Kong
Lam Miu Ling
School of Creative Media
City University of Hong Kong
Fion Ng
School of Creative Media
City University of Hong Kong
Malina Siu
School of Creative Media
City University of Hong Kong
Conference Coordinators
Jae Cheung
School of Creative Media
City University of Hong Kong
Choi Hoi Ling
School of Creative Media
City University of Hong Kong
PhD Student-led Salon Co-Curators
Ashley Wong
School of Creative Media
City University of Hong Kong
Mariana Perez-Bobadilla
School of Creative Media
City University of Hong Kong
Preface and Acknowledgements
These are the official proceedings of Art Machines, the 1st International Symposium on Computational Media Art, which
was held in Hong Kong from 4 to 7 January 2019, and organized and hosted by the School of Creative Media, City
University of Hong Kong. The conference title, Art Machines, refers to the conference theme that focused upon Machine
Learning and Art. The conference consisted of four plenary sessions organized around the theme of Machine Learning and
Art, a keynote symposium on Robotics and Art, two keynote addresses, open call panels, and a student-run salon. It was
accompanied by a sophisticated, high-level exhibition, Algorithmic Art: Shuffling Space and Time, which contextualized
contemporary computer-based art in relationship to the history of this practice in the region, and was curated by Dr. Linda
Lai in Hong Kong City Hall.
Plenary panels and keynotes were solicited by invitation and the breakout panels were solicited by open call. Open
call papers were solicited both on the conference theme of Machine Learning and Art and on broader themes pertaining
to computational media art in general. In the end, over 50 percent of the accepted papers directly addressed the main
conference theme. The overall acceptance rate was 60%. Contributions were invited under four categories. Full papers
were subject to double-blind peer review. Conference abstracts, solicited under two categories (artistic abstracts and
scholarly abstracts), were reviewed by the organizing committee. Finally, poster presentations were invited, but since
few were accepted, these were folded into the conference abstracts. This volume contains the accepted full papers (6),
together with the abstracts of the scholarly papers (29) and artistic papers (27) that were presented at the conference.
This distinction between scholarly and artistic abstracts is not an absolute one, but reflects the difference between
papers in which an artist makes an analytical presentation of his or her own work and papers that offer a scholarly
inquiry in the field.
The conference organizing committee consisted of nine faculty from School of Creative Media who divided
different responsibilities between them: Dr. Linda Lai directed the exhibition Algorithmic Art. Dr. Hector Rodriguez
and Dr. Tomas Laurenzo, who early in 2018 organized a successful conference on Machine Learning and Art in
Cordoba, Spain called Ars Incognita, came up with the conference theme and took responsibility for selecting the
participants in the plenaries and panels on Machine Learning and Art. Dr. Harald Kraemer and Mr. Tobias Klein
reviewed the artistic abstracts. Dr. Olli Tapio Leino and Dr. Damien Charrieras reviewed the scholarly abstracts and
Dr. Leino also oversaw the review process of the full papers. Prof. Maurice Benayoun assisted with the organization
of the student salon. Dr. Miu Ling Lam helped secure financial support from the Croucher Foundation and some of
the plenary contributions. I want to thank them all for their hard work in making this conference possible. I also want
to thank the leaders of the student salon Ashley Wong and Mariana Perez-Bobadilla. I would like to offer a special
acknowledgement to Dr. Leino for his wise counsel and support throughout. The expertise he brought to organizing
of this conference from his outstanding leadership of ISEA 2016 was invaluable, including providing the template for
this volume.
Art Machines and its accompanying exhibition, Algorithmic Art, would not have been possible without the support
of a number of key organizations and individuals. I offer grateful thanks to our financial donors: City University of Hong
Kong; The Innovation and Technology Fund, Hong Kong; The Leisure and Cultural Services Department (LCSD), Hong
Kong; The U.S. Consulate General in Hong Kong & Macau; The Croucher Foundation; and The Cultural and Sports
Committee, City University of Hong Kong. My special thanks to Dr. Louis Ng, Deputy Director, LCSD, Prof. Alex Jen,
Provost, CityU, and Prof. Horace Ip, Vice President, CityU. This volume was prepared before the conference and given
to every delegate. Thanks to all of you who responded to our call and participated in this conference and thanks, too, to
the various session chairs and moderators. Finally, I want to thank Fion Ng who helped us raise money and co-ordinate
Algorithmic Art, and give special thanks to Ms. Malina Siu. From day one, Malina took charge of the whole process,
and together with our team, Ms. Jae Cheung Oi Lun and Dr. Sureshika Piyasena, put in a lot of hard work to ensure that
presentation of this volume was of the highest standard.
Richard William Allen
Conference Director, Art Machines: ISCMA 2019
Dean, School of Creative Media
Chair Professor of Film and Media Art
City University of Hong Kong
Contents
Preface and Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

I Full Papers (peer-reviewed) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1 2.5D Computational Image Stippling. Kin-Ming Wong, Tien-Tsin Wong . . . . . . . . . . . . . . . . . . . 2
2 Artistic Intelligence. Ray LC (Luo) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3 CG-Art: Demystifying the Anthropocentric Bias of Artistic Creativity. Leonardo Arriagada . . . . . . . . . 20
4 Unrolling the Learning Curve: Aesthetics of Adaptive Behaviors with Deep Recurrent Nets for Text Generation. Sofian Audry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5 How does a Machine Judge Photos?. Wasim Ahmad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6 Ornament and Transformation – the Digital Painting of Robert Lettner at the Interface of Analogue and Algorithmic Art. Harald Kraemer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
II Scholarly Abstracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7 The Present Tense of Virtual Space. Andrew Burrell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8 Computational Photography. Yeon-Kyoung Lim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
9 import <execute> [as <command>]. Korsten, De Jong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10 The (un)predictability of Text-Based Processing in Machine Learning Art. Winnie Soon . . . . . . . . . . . 64
11 The Viewer Under Surveillance from the Interactive Artwork. Raivo Kelomees . . . . . . . . . . . . . . . . 66
12 The Demiurge, or a Manifestation of Carbo-Silico Evolution. Jaden Hastings . . . . . . . . . . . . . . . . . 69
13 Art Chasing Liability: Digital Sharecropping and Conscientious Law-Breaking. Monica Lee Steinberg . . . 72
14 Audiovisual Experiments with Evolutionary Games, and the Evolution of a Work-in-progress. Stefano Kalonaris 74
15 Artificial Intelligence, Artists, and Art: Attitudes Toward Artwork Produced by Humans vs. Artificial Intelligence. Joo-Wha Hong, Nathaniel Ming Curran . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
16 Introducing Machine Learning in the Creative Communities: A Case Study Workshop. Matteo Loglio, Serena Cangiano . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
17 Storytelling for Virtual Reality Film: Structure, Genre, Immersive and Interactive Narrative. Ka Lok Sobel Chan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
18 Generation of a Multi-pictorial Script. Haytham Nawar . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
19 Speculation and Acceleration: Financialization, Art & The Blockchain. Ashley Lee Wong . . . . . . . . . . 85
20 Aesthetic Coding: Exploring Computational Culture Beyond Creative Coding. Winnie Soon, Shelly Knotts . 87
21 Distributed Cognition in Ecological / Digital Art. Scott Rettberg . . . . . . . . . . . . . . . . . . . . . . . . 89
22 Playing with the Sound. Wing On Tse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
23 Art and Automation: The Role of the Artist in an Automated Future. Lodewijk Heylen . . . . . . . . . . . . 94
24 Atom, Bit, Coin, Transactional Art Between Sublimation and Reification. Maurice Benayoun, Tobias Klein . 96
25 Facial (Re)Cognition: Windows and Mirrors, and Screens. Megan Olinger . . . . . . . . . . . . . . . . . . 99
26 Are Photographers Superfluous? The Autonomous Camera. Elke Reinhuber . . . . . . . . . . . . . . . . . 101
27 How Machines See the World: Understanding Image Labelling. Carloalberto Treccani . . . . . . . . . . . . 104
28 The Struggle Between Text and Reader Control in Chinese Calligraphy Machines. Yue-Jin Ho . . . . . . . . 106
29 Bacterial Mechanisms: Material Speculation on Posthuman Cognition. Mariana Pérez Bobadilla . . . . . . 108
30 Lying Sophia and Mocking Alexa – An Exhibition on AI and Art. Iris Xinru Long . . . . . . . . . . . . . . 110
31 Art of Our Times: A Temporal Position to Art and Change. Tanya Toft Ag . . . . . . . . . . . . . . . . . . 112
32 Do Machines Produce Art? No. (A Systems-Theoretic Answer). Michael Straeubig . . . . . . . . . . . . . 114
33 The Janus-Face of Facial Recognition Software. Romi Mikulinsky . . . . . . . . . . . . . . . . . . . . . . . 116
34 A Pixel-Free Display Using Squid’s Chromatophores. Juppo Yokokawa, Haruki Muta, Ryo Adachi, Hiroshi
Ito, Kazuhiro Jo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
35 VR and AI: The Interface for Human and Non-Human Agents. Lukasz Mirocha . . . . . . . . . . . . . . . 120
III Artistic project abstracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

36 SHAPES of the Future: When Art Machines Pass the Turing Test. Terry Trickett . . . . . . . . . . . . . . . 124
37 Opinions – Body Movements and Sound. Yanbin Song . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
38 Constellation – Call Your Personalized Constellation. Nan Zhao . . . . . . . . . . . . . . . . . . . . . . . 130
39 The Dancer in the Machine. Simon Biggs, Sue Hawksley, Samya Bagchi, Mark D. McDonnell . . . . . . . 132
40 I’m evolving into a box. The Paradoxical Condition in AI. Wei-Yu Chen . . . . . . . . . . . . . . . . . . . . 135
41 Volumetric Black. Triton Mobley . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
42 AIBO – Artificially Intelligent Brain Opera – An Artistic Work-in-Progress Rapid Prototype. Ellen Pearlman 140
43 Artificial Digitality. Kuldeep Gohel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
44 Specimens of the Globe: Generative Sculpture in the Age of Anthropocene. Gyung Jin Shin . . . . . . . . . 144
45 Machine Learning for Performative Spaces. Alex Davies, Brad Miller, Boris Bagattini . . . . . . . . . . . 145
46 Penelope. Alejandro Albornoz, Roderick Coover, Scott Rettberg . . . . . . . . . . . . . . . . . . . . . . . 147
47 Hypomnesia, Game of Memory. Wanqi Li, Jian Guan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
48 Up-Close Experiences with Robots. Louis-Philippe Demers . . . . . . . . . . . . . . . . . . . . . . . . . . 151
49 Membrane or How to Produce Algorithmic Fiction. Ursula Damm, Peter Serocka . . . . . . . . . . . . . . 154
50 The Fresnel Video Lens. Steve Boyer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
51 MAC Check. Scott Fitzgerald . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
52 Visualizing Algorithms: Mistakes, Bias, Interpretability. Catherine Griffiths . . . . . . . . . . . . . . . . . 160
53 Multimedia Art: The Synthesis of Machine-generated Poetry and Virtual Landscapes. Suzana Ilić, Martina Jole Moro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
54 Microbial Sonorities. Carlos Castellanos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
55 The 360° Video Secret Detours as Case Study to Convey Experiences through Immersive Media and the Method of Presentation. Elke Reinhuber, Benjamin Seide, Ross Williams . . . . . . . . . . . . . . . . . . . 167
56 Parallax Relax: Expanded Stereoscopy. Max Hattler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
57 The Electronic Curator or How to Ride Your CycleGAN. Eyal Gruss, Eran Hadas . . . . . . . . . . . . . . 170
58 Das Fremde Robot Installation. Michael Spranger, Stéphane Noel . . . . . . . . . . . . . . . . . . . . . . . 172
59 Repopulating the City: Introducing Urban Electronic Wildlife. Guillaume Slizewicz, Greg Nijs . . . . . . . 174
60 Anonymous Conjecture. Fangqing He . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
61 Adversarial Ornament Attack. Michal Jurgielewicz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
62 The Time Machine: a Multiscreen Generative Video Artwork. Daniel Buzzo . . . . . . . . . . . . . . . . . 181
IV Review Board . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Part I
Full Papers (peer-reviewed)
2.5D Computational Image Stippling
Kin-Ming Wong
artixels
mwkm@artixels.com

Tien-Tsin Wong
The Chinese University of Hong Kong
ttwong@cse.cuhk.edu.hk
Fig. 1. 2.5D computational image stippling examples (10,240 points): (a) input pair (image + depth); (b) regular stippling; (c) stippling with depth of field using our method.
Abstract
We present a novel 2.5D1 image stippling
process that renders the photographic depth-of-field effect directly as an integral feature without
any need for image filtering computation. Our
approach relies on an additional depth image to
produce the effect. The proposed method is
based on a recent physically based blue noise
sampling technique, which allows sampling
naturally from spatial data, such as a 3D point
cloud. The separation of the image data and its
spatial information under our proposed 2.5D
setting enables additional creative possibilities
of image stippling art. Our approach can also
produce an animated sequence that mimics the
rack focus effect with good temporal coherence.
1. Introduction
Image stippling has a long history, dating back
to the 16th century as a printmaking technique
introduced by Giulio Campagnola [1] for
reproducing smooth tones, shading and image
details. This image-making technique uses only
strong tone dots as the sole pictorial elements,
and it demands an extremely skilful spatial
arrangement. After centuries, stippling is still
ubiquitous because of its unique aesthetics, the
transparency of the process, and its simplicity as
an art form.
Computational image stippling connects
tightly to blue noise adaptive sampling
techniques. Deussen and Isenberg [2] offer an
excellent comprehensive review of its
development. The term blue noise was formally
defined and characterized by Ulichney [3] in his
dithering research work. Figure 1b shows an
example of how the structureless blue noise
points reproduce pleasantly the underlying
image tone with subtly varying yet uniform
distribution.
Early research work in computer graphics
related to blue noise and image stippling was
driven by the need for tone reproduction
improvement for early digital printing and
display devices. Floyd and Steinberg [4]
proposed the error diffusion technique, which
stands as one of the best examples of how
dithering improves tone reproduction. In the
rendering research community, Dippé and Wold
[5] proposed the use of Poisson disk sampling in
rendering with reference to work on the study of
spatial pattern of photo-receptors by Yellott [6].
1. 2.5D image processing refers to techniques which take advantage of the per-pixel distance-from-camera information, i.e. depth information.
Cook [7] further popularized the effectiveness of
Poisson disk sampling, which is effectively a
quality blue noise sampling point set.
Stippling-focused research work proposed by
Deussen [8] relies on the relaxation technique
proposed by Lloyd [9] to produce quality stipple
drawings. To enable a more interactive
experience, Secord [10] introduced a
precomputed stipple tile-based approach, along
with the weighted Voronoi method.
Ostromoukhov et al. [11] and Kopf et al. [12]
proposed improved tile-based acceleration
techniques for better interactive image stippling.
More modern blue noise research work by
Balzer et al. [13], namely the Capacity
Constrained Voronoi Tessellation (CCVT)
technique, is considered the state-of-the-art blue
noise sampling method. CCVT serves as an
important model, which inspired additional
work. One such work was proposed by De Goes
et al. [14], which formulated the capacity
constrained model into an optimal transport
problem, now commonly known as the BNOT
method. The kernel density model proposed by
Fattal [15] also set a new standard for blue noise
sampling quality.
There are computational image stippling
methods that are designed to improve the quality
or variety of image stippling art from different
perspectives. Pang et al. [16] proposed an
approach that emphasizes reproduction of the
structural details. Kim et al. [17] proposed an
example-based stippling method that enables the
use of sampled stippling patterns. Wei [18]
introduced multi-class sampling, which enables
more sophisticated stippling possibilities, and Li
et al. [19] proposed an anisotropic technique,
which substitutes dots with adaptive thin
directional pictorial elements. Li and Mould
[20] proposed a structure aware stippling
method, which allows user-defined priority of
stipple emphasis.
For the depth-of-field effect, there is no
shortage of bitmap image filtering-based
techniques [21, 22, 23], which render the
photographic effect using an additional depth
image. To the best of our knowledge, there has
been no attempt to introduce photographic
effects to the image stippling process as an
integral feature without any pre-processing of
the input image.
Our proposed 2.5D image stippling method
renders the depth-of-field effect as a
computation-free feature. We rely on the
physically based blue noise sampling technique
proposed by Wong and Wong [24] as the core of
our approach. This sampling technique models
the sample points as electrically charged
particles, which self-organize by movement to
reach an equilibrium. We apply an intuitive
extension to this blue noise sampling method so
that 2.5D image data can be adaptively sampled.
This dynamics-based approach also allows us to
produce an animated rack focus effect by
changing the focus distance during simulation;
the animated result shows stable temporal
coherence.
In section 2, we give a brief overview of the
blue noise sampling technique used in our
method and how it inspired our work. Section 3
describes the details of our extension for 2.5D
image data sampling. In section 4, we
demonstrate and evaluate the depth of field
enabled stippling results from an artistic point of
view. And in section 5, we discuss a few creative
stippling applications based on our method.
2. Physically based Blue Noise Sampling
In this section, we review the blue noise
sampling technique proposed by Wong and
Wong [24], which serves as the foundation of
our 2.5D image stippling method. This sampling
method takes a very intuitive approach, modelling
the sampling points as a system of
electrically charged particles, with each carrying
an identical charge. These like-charged particles
repel each other, and the system undergoes self-organization by movement until it reaches an
equilibrium state by maintaining a uniform
equidistant neighbourhood around each particle.
The particles' positions are then computed by
integrating the equations of motion using a
customized Velocity Verlet numerical integrator
[25, 24], described in the original article. The
whole idea is not totally innovative. It was first
suggested by Hanson [26] and later by Schmaltz
[27], but using a pure 2D electric field.
2.1 Uniform Sampling
Fig. 2. Uniform sampling using the physically based blue noise sampling method [24]: (a) uniform point set with q_s = 0.25; (b) power spectrum.

Given a system of N particles constrained on an imaginary 2D plane, the total electrostatic force exerted on a particle p_i based on Coulomb's inverse-square law is governed by the following equation (eq. 1):
F_i = q_s² ∑_{j≠i}^{N} ê_{j,i} / ‖r_i − r_j‖²
where q_s is the amount of charge carried by each particle, r_i and r_j are the positions of particles p_i and p_j, respectively, and ê_{j,i} is a unit vector pointing from r_j to r_i, which represents the direction of the force. The process is simulated in a periodic domain, and the particles self-organize to reach an equilibrium state. Figure 2 shows a uniform point set generated using this physically based technique. This point set exhibits high-quality blue noise characteristics, as reflected by its power spectrum, shown in Figure 2b.
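The repulsive force of equation (1) can be sketched as follows. This is a minimal vectorized sketch, not the authors' implementation; it assumes a unit periodic domain with a nearest-image distance convention:

```python
import numpy as np

def repulsion_forces(r, qs=0.25):
    """Total electrostatic force on each of N like-charged particles (eq. 1).

    r: (N, 2) array of particle positions in a unit periodic domain.
    Returns (N, 2) forces F_i = qs^2 * sum_{j != i} e_{j,i} / ||r_i - r_j||^2.
    """
    d = r[:, None, :] - r[None, :, :]        # d[i, j] = r_i - r_j
    d -= np.round(d)                         # nearest periodic image, in [-0.5, 0.5)
    dist2 = np.einsum('ijk,ijk->ij', d, d)   # squared pairwise distances
    np.fill_diagonal(dist2, np.inf)          # exclude self-interaction (j != i)
    # e_{j,i} / ||d||^2 = d / ||d||^3, hence the 1.5 exponent on dist2.
    return qs**2 * np.sum(d / dist2[..., None]**1.5, axis=1)

rng = np.random.default_rng(0)
pts = rng.random((64, 2))
F = repulsion_forces(pts)
```

By Newton's third law the pairwise forces cancel, so the net force on the whole system is (numerically) zero, which is a convenient sanity check when implementing the simulation.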
2.2 Adaptive Sampling
What inspired our 2.5D image stippling
approach is the adaptive sampling model
proposed by this sampling method. To
adaptively sample a varying density function,
such as a bitmap image, the sampling method
creates an additional imaginary 2D plane,
named the density plane. On this new density
plane, a regular grid of M non-moving
attractively charged particles is created; each
particle's charge is determined by the
corresponding pixel that it represents. The amount of charge q_k carried by a given particle p_k on the density plane is defined as follows (eq. 2):

q_k = −A(1.0 − I_k)

Fig. 3. Impact of the sampling particle's charge q_s on adaptive sampling: (a) q_s = 0.05; (b) q_s = 0.35.
where I_k is the intensity value of the pixel that particle p_k represents, and A is a positive-valued coefficient determined by the total charge of the particles on the sampling plane. This relationship guarantees a total balance of potential. The force exerted on a particle p_i on the sampling plane by the charges on the density plane is governed by the following equation (eq. 3):
G_i = q_s² ∑_{k=1}^{M} q_k ê_{k,i} / ‖r_i − r_k‖²
The total force experienced by a particle p_i can be expressed as the sum of equations (1) and (3).
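In code, the density-plane charges of equation (2) and the attraction of equation (3) might look like the sketch below. The balance convention used to fix A (total attractive charge equals the total sampling charge N·q_s) and the grid layout are our assumptions, not details taken from the paper:

```python
import numpy as np

def density_plane_charges(image, n_samples, qs):
    """Fixed attractive charges on the density plane (eq. 2): q_k = -A(1 - I_k).

    A is chosen so the total attractive charge balances the total sampling
    charge n_samples * qs (our assumed balance convention).
    """
    w = 1.0 - image.ravel()                  # darker pixels attract more strongly
    A = (n_samples * qs) / w.sum()
    return -A * w

def attraction_forces(r, rk, qk, qs):
    """Force on each sampling particle from the density-plane charges (eq. 3)."""
    d = r[:, None, :] - rk[None, :, :]       # d[i, k] = r_i - r_k
    dist2 = np.einsum('ijk,ijk->ij', d, d) + 1e-12
    # q_k < 0, so each term points from r_i toward r_k (attraction).
    return qs**2 * np.sum(qk[None, :, None] * d / dist2[..., None]**1.5, axis=1)

# Tiny example: a 4x4 intensity ramp sampled by 8 particles.
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
qk = density_plane_charges(img, n_samples=8, qs=0.25)
ys, xs = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing='ij')
rk = np.stack([xs.ravel(), ys.ravel()], axis=1)
r = np.random.default_rng(1).random((8, 2))
G = attraction_forces(r, rk, qk, qs=0.25)
```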
We carefully examined the stipple images produced by this blue noise sampling method and noticed that the amount of charge q_s carried by the sampling particles has an important impact on the overall image quality. Figure 3 shows a pair of stipple images produced using different values of q_s. A higher value of q_s produces an impression of better contrast. We believe this is a logical consequence: the larger force between sampling particles produces more space in areas of low density (i.e. brighter areas), which boosts the overall contrast. It is not
Fig. 4. Adaptive sampling with different inter-plane distances: (a) small inter-plane distance; (b) large inter-plane distance.
hard to see that Figure 3b offers better contrast than Figure 3a. For a lower value of q_s, we note that the points are noticeably less structured, and they also seem more sensitive to subtle local image structures. In our experience, a higher value of q_s also accelerates convergence, should that be a factor to consider.
The density plane is by design placed tightly
and parallel to the sampling plane to control the
local density of the sampling particles. Wong
and Wong [24] briefly demonstrated the impact
of this inter-plane distance on the adaptive
sampling results, and they named it a parameter
for sharpness control. Figure 4 shows the effects
of this parameter. It has an intuitive physical
meaning here because according to Coulomb's
inverse-square law, attractive force should be
weakened and less localized when the distance
between the sampling and the density planes
increases, resulting in a stipple image that gives
a blurred impression, as shown in Figure 4b.
Although the force applied by the density plane,
as expressed in Equation (3), assumes a planar
arrangement of the particles, the model itself
does permit a 3D configuration, as mentioned in
Wong and Wong [24]. Our method exploits this
3D configuration possibility as the foundation of
our depth-of-field effect integrated stippling
technique.
3. 2.5D Image Stippling
By extending the idea of using a 2D density
plane for adaptive sampling, we propose
substituting the planar setup of density particles
with a height-field alike configuration. In our
new model, each density particle has its own
depth from the sampling plane defined by an
additional depth image. We also introduce a new
parameter d_f, which defines the focus distance, so that the density particles at a distance d_f from the sampling plane give an in-focus impression in the stipple result.
To achieve this visual effect, we displace the
whole density field towards the sampling plane
by d_f, so the in-focus density particles exert a
strong attraction to the sampling particles. Based
on this new proposal, we adapt Equation (3) to
accommodate the changes. The force exerted by
this new configuration is now governed by the
following equation (eq. 4):
G_i = q_s² ∑_{k=1}^{M} q_k ê_{k,i} / (‖r_i − r′_k‖² + ε)

where r′_k = r_k − (0, 0, d_f) is the new position of density particle p_k, ê_{k,i} is a unit vector pointing from r′_k to r_i, and ε maintains a minimum distance between particles to avoid
instability. To control the amount of depth of
field, the depth component of all density
particles can be globally scaled to achieve the
desired degree of depth of field.
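The 2.5D force of equation (4) can be sketched as follows. The symbols d_f (focus distance) and eps (a small stabilizing offset) follow our notation and the global depth scaling described above; this is an illustrative sketch, not the authors' code:

```python
import numpy as np

def stipple_forces_25d(r, rk_xy, depth, qk, d_f, qs=0.3, depth_scale=1.0, eps=1e-4):
    """Attractive force on 2D sampling particles from a 2.5D density field (eq. 4).

    r:       (N, 2) sampling-particle positions on the z = 0 plane.
    rk_xy:   (M, 2) density-particle positions (one per pixel).
    depth:   (M,)  per-pixel depth from the depth image.
    d_f:     focus distance; the density field is shifted toward the sampling
             plane by d_f, so particles at depth d_f pull hardest.
    depth_scale: global scale on depth, controlling the amount of depth of field.
    eps:     small offset keeping a minimum particle distance (our assumption).
    """
    z = depth_scale * depth - d_f            # r'_k = r_k - (0, 0, d_f)
    r3 = np.concatenate([r, np.zeros((len(r), 1))], axis=1)
    rk3 = np.concatenate([rk_xy, z[:, None]], axis=1)
    d = r3[:, None, :] - rk3[None, :, :]
    dist2 = np.einsum('ijk,ijk->ij', d, d) + eps
    f = qs**2 * np.sum(qk[None, :, None] * d / dist2[..., None]**1.5, axis=1)
    return f[:, :2]                          # particles stay on the 2D sampling plane

# An in-focus density particle pulls harder than an out-of-focus one at the same (x, y).
p = np.array([[0.5, 0.5]])
near = stipple_forces_25d(p, np.array([[0.5, 0.6]]), np.array([0.5]), np.array([-1.0]), d_f=0.5)
far = stipple_forces_25d(p, np.array([[0.5, 0.6]]), np.array([0.9]), np.array([-1.0]), d_f=0.5)
```

The example at the bottom illustrates why displacing the field by d_f produces the in-focus impression: the inverse-square falloff makes the nearest (in-focus) density particles dominate the pull on the sampling particles.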
We use the same numerical integrator
described in Wong and Wong [24]; the
algorithm is outlined in Algorithm 1. Using
OpenGL compute shaders, we implemented a
simple GPU application based on our method.
Figure 5 shows an example of how our method
is used to create stipple images from the same
input with different focus distances. The average
computation time of this example is 326ms per
iteration, using an NVIDIA GeForce GT 650M
mobile GPU.
______________________________________
Algorithm 1 Numerical Integrator
1. Position Update: x(t + Δt) = x(t) + clamp(v(t)Δt + ½ a(t)Δt², λ)
2. Acceleration Update: compute a(t + Δt) using x(t + Δt)
3. Velocity Update: v(t + Δt) = β (v(t) + ½ (a(t) + a(t + Δt)) Δt)
4. Repeat
______________________________________
Fig. 5. Image stippling examples with depth-of-field effect using our method; both used q_s = 0.3 and 150 iterations to converge: (a) input pair (image + depth); (b) focus on the front, depth = 0.25; (c) focus on the back, depth = 0.55.
where β is a user-defined damping factor in the range [0, 1), which improves convergence. We find that a value of 0.95 works best in most scenarios. λ defines the maximum per-time-step displacement of each particle, which we keep constant at 0.002, using a normalized coordinate system in our periodic simulation setting.
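The integration step can be sketched as a damped velocity-Verlet update with a per-step displacement clamp. The placement of β and λ below reflects our reading of Algorithm 1 and is an assumption rather than the authors' exact implementation:

```python
import numpy as np

def verlet_step(x, v, a, accel_fn, dt=1.0, beta=0.95, lam=0.002):
    """One damped velocity-Verlet step (our sketch of Algorithm 1).

    accel_fn(x) returns per-particle accelerations; beta in [0, 1) damps the
    velocity to improve convergence, and lam clamps the per-step displacement
    (0.002 in the paper's normalized periodic coordinates).
    """
    # 1. Position update, clamping each particle's displacement to lam.
    dx = v * dt + 0.5 * a * dt**2
    norm = np.linalg.norm(dx, axis=1, keepdims=True)
    dx *= np.minimum(1.0, lam / np.maximum(norm, 1e-12))
    x_new = (x + dx) % 1.0                   # periodic unit domain
    # 2. Acceleration update at the new positions.
    a_new = accel_fn(x_new)
    # 3. Damped velocity update.
    v_new = beta * (v + 0.5 * (a + a_new) * dt)
    return x_new, v_new, a_new

# One step with zero forces: the displacement is clamped, the velocity damped.
x = np.array([[0.5, 0.5]])
v = np.array([[1.0, 0.0]])
a = np.zeros((1, 2))
x1, v1, a1 = verlet_step(x, v, a, lambda p: np.zeros_like(p))
```

In practice accel_fn would evaluate the sum of equations (1) and (4) at the new positions; the clamp keeps a badly initialized particle from overshooting, which is why the simulation stays stable across the rack-focus animation.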
4. Evaluation
In this section, we evaluate the visual quality
and image characteristics of our rendered
output. In the PDF version of this paper, all
stipple images are embedded in vector form for
better visual examination.
4.1 Pre-filtered Depth of Field
The depth-of-field effect is traditionally
achieved by applying adaptive filtering to a
bitmap image, based on a depth map. We
evaluate the qualitative difference between our results and those of the traditional approach from an artistic point of view rather than a technical one, because our method is not designed to parallel or match the filtering result of the bitmap image-based technique.
We used commercial software [28] to obtain a
pre-filtered bitmap, which is made to match the
degree of depth of field in Figure 5b. Figure 6b
shows a regular stippling result of the pre-filtered depth-of-field input using our method; it is not hard to observe that the stipple image using the pre-filtered input maintains better contrast and a stronger photographic impression.

Fig. 6. Stipple image of the pre-filtered depth-of-field image: (a) pre-filtered input; (b) q_s = 0.3.

Our
depth-of-field result in Figure 5b, however, has
a stronger illustrational and handcrafted quality.
As our approach does not intend to accurately
simulate the bitmap image filtering process, we
believe that our result has a unique look with its
own aesthetic qualities.
4.2 Degree of Depth of Field
Our model allows different degrees of depth of
field by globally scaling the depth component of
the input depth map. Figure 7 shows two
stippling results rendered with different depth
scaling factors, while all other settings remain
identical. The one with shallow depth of field,
Fig. 7. Different degrees of depth of field: (a) medium depth of field; (b) shallow depth of field.
shown in Figure 7b, demonstrates stronger tone
and local contrast on the dark in-focus areas. We
believe this is a consequence of the relatively
stronger attraction force and denser in-focus
neighbourhood.
4.3 Tone and Feature Reproduction
Characteristics
As mentioned above, the sampling particle's
charge has an impact on the overall image
contrast. This is an inherent property of the
sampling method [24], but we take a deeper look
at how this parameter qs affects the overall
image quality. We use a pair of stipple images
with the same depth-of-field settings and a
lower number of sample points (5,120 points) to
illustrate our observations more clearly.
Fig. 8. Effects of particle charge: (a) particle charge qs = 0.1; (b) particle charge qs = 0.5.
Figure 8a is produced using a smaller particle
charge. It is not hard to observe that the stipple
points on this image are far less structured than
the ones in Figure 8b. The stipple points rely on
various subtle and continuously varying density
distributions to reveal the underlying image.
This characteristic helps to maintain the subtle
local tonal changes, and the whole image
possesses a more organic quality from an artistic
point of view.
In contrast, the stipple points in Figure 8b are
more structurally organized; this is especially
clear on the silhouettes and other sharp features.
The overall image has more technical clarity,
and better overall image contrast. We believe
this setting is good for instructional or graphical
illustration purposes.
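The role of the particle charge can be sketched with a toy force computation. This is our own illustrative sketch, assuming a Coulomb-like inverse-distance interaction in the spirit of electrostatic halftoning [27] and the N-body sampler [24]; the function names and the convention of opposite-signed density charges are our assumptions, not code from the paper:

```python
def coulomb_force(p, q, charge_p, charge_q):
    """2D inverse-distance force exerted on p by q; like charges repel,
    opposite charges attract. Coincident points exert no force."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    r2 = dx * dx + dy * dy
    if r2 == 0.0:
        return (0.0, 0.0)
    s = charge_p * charge_q / r2
    return (s * dx, s * dy)

def net_force(sample, samples, density_pts, qs, qd=-1.0):
    """Net force on one sample point: repulsion from all other samples
    (each carrying charge qs) plus attraction toward the density
    particles (opposite charge qd)."""
    fx = fy = 0.0
    for other in samples:
        gx, gy = coulomb_force(sample, other, qs, qs)
        fx, fy = fx + gx, fy + gy
    for d in density_pts:
        gx, gy = coulomb_force(sample, d, qs, qd)
        fx, fy = fx + gx, fy + gy
    return (fx, fy)
```

Because inter-sample repulsion scales with qs² while attraction to the density particles scales only with qs, a larger charge tips the balance toward rigid, well-separated point arrangements, consistent with the more structured look of Figure 8b.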
Part I. Full Papers (peer-reviewed)
Fig. 9. Mixed input for stylized stippling: (a) mixed inputs; (b) stipple output.
Fig. 10. Mixed input for graphic design: (a) mixed inputs; (b) stipple output.
5. Creative Possibilities
In this section, we explore various creative
possibilities with our proposed method, ranging
from general manipulation to photographic
processing and animated sequence output.
5.1 Mixed Input as Masked Processing
As our method relies on a separate given depth
image, users can always use a depth map that is
not necessarily related to the image as a means
to achieve other creative effects. Figures 9 and
10 show two creative uses of mixing an
unrelated depth map with an image map to
create a masked stippling.
5.2 Image Processing
To render the depth-of-field effect for bitmap
images, image features more distant from the
focus plane require more processing because of
a larger filter kernel, but this does not apply to
our stippling method. For general bitmap image
processing based on convolution, we may
loosely relate the filter kernel radius in bitmap
image processing to the depth component of a
density particle in our method.
Fig. 11. Tilt-shift-like image filtering: (a) input pair; (b) stipple output.
As an example of this connection, we follow
how bitmap image processing applies a tilt-shift
effect to a given image; this is usually achieved
by blurring with a filter kernel whose radius
increases radially from the focus region. We
reproduce the effect with a depth map that
mimics this approach. Figure 11 shows the input
pair and the result.
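As a concrete illustration of this idea (a sketch under our own assumptions, not the authors' exact procedure), a tilt-shift-style depth map can be synthesized by letting depth grow with distance from a horizontal focus band; feeding such a map to a depth-aware stippler mimics the radially increasing kernel radius:

```python
def tilt_shift_depth(width, height, focus_row, band=0.15, scale=1.0):
    """Synthesize a depth map whose depth grows with vertical distance
    from a horizontal focus band (a common tilt-shift approximation).
    Returns a row-major list of rows of float depths in [0, scale)."""
    half_band = band * height / 2.0
    depth = []
    for y in range(height):
        d = max(0.0, abs(y - focus_row) - half_band)  # zero inside the band
        d /= (height - half_band)                     # normalize
        depth.append([scale * d] * width)             # constant along a row
    return depth

# Rows near focus_row stay in focus; depth increases toward the edges.
dmap = tilt_shift_depth(8, 8, focus_row=4)
```

Pairing such a synthetic map with an ordinary photograph reproduces the miniature-faking look without any convolution pass.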
We believe this analogy between the kernel
radius and the density particle's depth would
serve as a good research direction for exploring
systematic processing techniques for stipple
images, or more precisely, point-based images.
5.3 Temporal Coherence of Stipple Image
Sequence
We include with this paper a short video as
supplemental material to demonstrate how our
dynamics-based stippling method can be used to
generate an animated sequence of stipple images
that mimics the rack-focus effect. The converged
point set of one frame can be used directly as
the initialization point set for the next stipple
computation. As long as the focus distance shifts
slowly, the new stipple image converges in one
or just a few time-steps in our experience.
More importantly, the two consecutive stipple
images often demonstrate good temporal
coherence. This is the advantage of the global,
dynamics-based blue noise method proposed by
Wong and Wong. [24] This temporal coherence
is often hard to achieve with the sequential
method or algorithms which rely on
randomization.
Theoretically, this temporal coherence
characteristic should also apply to animated
video clip input, provided there is no vigorous
change in image content, but this potential was
not explored in the original paper.
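The warm-starting behind this rack-focus loop can be sketched with a toy 1D relaxation (our own illustration; `relax_step` is a hypothetical stand-in for one time-step of the actual N-body stippling dynamics of [24]):

```python
def relax_step(points, target, rate=0.5):
    """One damped step moving each point toward its target position
    (a stand-in for one time-step of the stippling dynamics)."""
    return [p + rate * (t - p) for p, t in zip(points, target)]

def converge(points, target, tol=1e-3, max_steps=1000):
    """Iterate until every point is within tol of its target.
    Returns (final points, number of steps taken)."""
    steps = 0
    while max(abs(p - t) for p, t in zip(points, target)) > tol:
        points = relax_step(points, target)
        steps += 1
        if steps >= max_steps:
            break
    return points, steps

# Rack focus: the target configuration drifts slowly frame to frame.
frames = [[f + 0.05 * k for f in (0.0, 1.0, 2.0)] for k in range(5)]
cold_start = [10.0, 10.0, 10.0]
points, cold_steps = converge(cold_start, frames[0])
warm_steps = []
for target in frames[1:]:
    points, n = converge(points, target)  # warm start from previous frame
    warm_steps.append(n)
```

Each warm-started frame converges in far fewer steps than the cold start, and because consecutive frames share most of their point configuration, the output also stays temporally coherent.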
Fig. 12. Per-iteration time performance on GT650M.
6. Performance
We implemented a simple graphics processing
unit (GPU) application using OpenGL compute
shaders without any specialized algorithmic
acceleration. Stippling computation time
depends only on the number of sample points
and the input image size; the degree of depth of
field has no impact on our performance. For a
stippling of 10,240 points and an input bitmap
of size 256 × 256, each iteration takes less than
150 ms on a modest GeForce GT650M notebook
GPU.
Our compute shader parallelizes in a
per-sample-point fashion, and the OpenGL compute
shader allows us to maximize the use of local
memory to minimize the GPU global memory
bottleneck. A summary of timing information is
provided in Figure 12, showing how
computation time increases with the number of
sample points under different input bitmap sizes.
Although we believe our method should run
impressively on more modern GPUs, computing
a stippling with several hundred thousand
sample points at an interactive rate definitely
requires algorithmic-level acceleration. The
physically based blue noise sampling method
[24] we use is practically an N-body simulation,
so any algorithmic acceleration for an N-body
simulation should work for our method too. The
multi-level summation method proposed by
Hardy et al. [29] and the non-equidistant fast
Fourier transform-based acceleration method by
Gwosdek et al. [30] are both applicable to our
method.
Fig. 13. Inconsistency of perceived brightness: (a) regular stipple image; (b) our stipple result with depth of field.
In addition, the electric field of the density
particles can be theoretically precomputed as a
high resolution look-up table for runtime
interpolation.
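The look-up side of that idea can be sketched as follows (our own sketch, not code from the paper): each component of the precomputed field would be stored as a regular grid and queried at runtime with bilinear interpolation:

```python
def bilerp(table, x, y):
    """Bilinearly interpolate a 2D look-up table (a list of rows) at
    fractional grid coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(table[0]) - 1)  # clamp at the grid border
    y1 = min(y0 + 1, len(table) - 1)
    fx, fy = x - x0, y - y0
    top = table[y0][x0] * (1.0 - fx) + table[y0][x1] * fx
    bottom = table[y1][x0] * (1.0 - fx) + table[y1][x1] * fx
    return top * (1.0 - fy) + bottom * fy
```

At runtime, the x and y field components would be fetched from two such tables, trading the per-density-particle summation for two interpolated reads per sample point.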
7. Discussion
We have presented a novel 2.5D image stippling
method which is able to render certain
photographic effects for free. Based on a global
blue noise sampling technique, our method
generates animated sequences of these effects
with good temporal coherence.
However, we are aware that our method
cannot maintain the consistency of the overall
image brightness across stipples. Figure 13
shows a pair of images; Figure 13a is a regular
stipple image, and in Figure 13b the
depth-of-field effect was applied. There is an obvious
tone difference between them, which can be
explained by the concentration of attraction
force. To provide overall brightness
consistency, we believe that an algorithm to
adjust the number of sample points has to be in
place. This could be considered for future
research.
References
1. G. Fiocco, “La giovinezza di Giulio
Campagnola,” L’Arte, vol. XVII (1915).
2. Oliver Deussen and Tobias Isenberg,
“Halftoning and stippling,” in Image and
Video-Based Artistic Stylisation (Springer), 45–61.
3. Robert A. Ulichney, “Dithering with blue
noise,” Proc. IEEE 76, no. 1 (1988): 56–79.
4. Robert W. Floyd, “An adaptive algorithm for
spatial gray-scale,” Proc. Soc. Inf. Disp., 17
(1976): 75–77.
5. Mark A.Z. Dippé and Erling Henry Wold,
“Antialiasing through stochastic sampling,”
ACM Siggraph Computer Graphics 19, no. 3
(1985): 69–78.
6. John I. Yellott, “Spectral consequences of
photoreceptor sampling in the rhesus retina,”
Science 221, 4608 (1983).
7. Robert L. Cook, “Stochastic sampling in
computer graphics,” ACM Transactions on
Graphics (TOG) 5, no. 1 (1986): 51–72.
8. Raanan Fattal, “Blue-noise point sampling
using kernel density model,” ACM Transactions
on Graphics (TOG) 30 (2011): 48.
9. Stuart Lloyd, “Least squares quantization in
PCM,” IEEE Transactions on Information
Theory 28, no. 2 (1982): 129–137.
10
10. Adrian Secord, “Weighted Voronoi
stippling,” ACM Proceedings of the 2nd
International Symposium on Non-Photorealistic
Animation and Rendering (2002): 37–43.
11. The Foundry, Nuke 10.0, Vol. 3 (2016).
12. Johannes Kopf, Daniel Cohen-Or, Oliver
Deussen, and Dani Lischinski, “Recursive
Wang tiles for real-time blue noise,” ACM
Transactions on Graphics (TOG) 25 (2006).
13. Michael Balzer, Thomas Schlömer, and
Oliver Deussen, “Capacity-constrained point
distributions: a variant of Lloyd’s method,”
ACM Transactions on Graphics (TOG) 28 (2009).
14. Fernando De Goes, Katherine Breeden,
Victor Ostromoukhov, and Mathieu Desbrun,
“Blue noise through optimal transport,” ACM
Transactions on Graphics (TOG) 31, no. 6
(2012): 171.
15. Oliver Deussen, Stefan Hiller, Cornelius
Van Overveld, and Thomas Strothotte,
“Floating points: A method for computing
stipple drawings,” Computer Graphics Forum
19 (2000): 41–50.
16. Wai-Man Pang, Yingge Qu, Tien-Tsin
Wong, Daniel Cohen-Or, and Pheng-Ann Heng,
“Structure-aware halftoning,” ACM
Transactions on Graphics (TOG) 27 (2008): 89.
17. Sung Ye Kim, Ross Maciejewski, Tobias
Isenberg, William M. Andrews, Wei Chen,
Mario Costa Sousa, and David S. Ebert,
“Stippling by example,” ACM Proceedings of
the 7th International Symposium on
Non-Photorealistic Animation and Rendering (2009):
41–50.
18. Li-Yi Wei, “Multi-class blue noise
sampling,” ACM Transactions on Graphics
(TOG) 29, no. 4 (2010): 79.
19. Hua Li and David Mould,
“Structure-preserving stippling by priority-based error
diffusion,” Canadian Human-Computer
Communications Society, Proceedings of
Graphics Interface (2011): 127–134.
20. Hongwei Li, Li-Yi Wei, Pedro V Sander,
and Chi-Wing Fu, “Anisotropic blue noise
sampling,” ACM Transactions on Graphics
(TOG) 29 (2010): 167.
21. Joe Demers, “Depth of field: A survey of
techniques,” GPU Gems 1 (2004): 375–390.
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
2.5D Computational Image Stippling. Kin-Ming Wong, Tien-Tsin Wong
22. Jhonny Göransson and Andreas Karlsson,
“Practical post-process depth of field,” GPU
Gems 3 (2007), 583–606.
23. Martin Kraus and Magnus Strengert,
“Depth-of-Field Rendering by Pyramidal Image
Processing,” Computer Graphics Forum, 26
(2007): 645–654.
24. Kin-Ming Wong and Tien-Tsin Wong,
“Blue Noise Sampling using an N-Body based
Simulation Method,” The Visual Computer,
Proceedings of Computer Graphics
International 33, no. 6–8 (2017): 823–832.
25. William C. Swope, Hans C. Andersen,
Peter H. Berens, and Kent R. Wilson, “A
computer simulation method for the calculation
of equilibrium constants for the formation of
physical clusters of molecules: Application to
small water clusters,” The Journal of Chemical
Physics 76, 1 (1982): 637–649.
26. Kenneth M. Hanson, “Halftoning and
Quasi-Monte Carlo,” Los Alamos National
Laboratory (2005), 430–442.
27. Christian Schmaltz, Pascal Gwosdek,
Andrés Bruhn, and Joachim Weickert,
“Electrostatic halftoning,” Computer Graphics
Forum 29 (2010): 2313–2327.
28. Victor Ostromoukhov, Charles Donohue,
and Pierre-Marc Jodoin, “Fast hierarchical
importance sampling with blue noise properties,”
ACM Transactions on Graphics (TOG) 23
(2004): 488–495.
29. David J. Hardy, John E. Stone, and Klaus
Schulten, “Multilevel summation of
electrostatic potentials using graphics
processing units,” Parallel Computing 35, no. 3
(2009): 164–177.
30. Pascal Gwosdek, Christian Schmaltz,
Joachim Weickert, and Tanja Teuber, “Fast
electrostatic halftoning,” Journal of real-time
image processing 9, 2 (2014): 379–392.
Artistic Intelligence
Ray LC [Luo]
Parsons School of Design
Brain and Mind Research Institute, Cornell Medical College
rayLC@newschool.edu [rayLC.org]
Abstract
Machine Learning (ML) has been applied in the
financial, medical and educational fields to
make, for example, smart stock predictors,
hospital robots, and virtual assistants. But its
use in one of the most human of endeavours,
creative expression, has been relatively
unexplored. Current applications of ML in
artistic endeavours employ mostly artificial
agents to extend human capabilities to realms
where access to extensive data provides
opportunities for associations previously
unexploited by human artists. These examples
take the human point of view first and merely
expand human ability, by generating novel
musical combinations based on a simple palette
of tones, analyzing image content to pick out
styles that serve as training for further image
transformations, or joining poetic text based on
phonetic similarities, for example. While these
applications rely on ML as a data-mining agent
in unexplored domains, they fail to exceed the
limits of human expectations of what they can
do. There’s another arena in which ML enables
artistic expression: using Artificial Intelligence
(AI) in unexpected ways in everything we
interact with. Imagine, for example, talking to a
human whose responses are generated by
Google Assistant, or interacting with a robot
who secretly wants to make you take
medication. I propose using ML to give novel
behaviours to objects we interact with, allowing
these behaviours to vary using predefined
parameters for training, which are unknown to
users. Applying ML to unexpected forms of
interactions changes what we think machines are
capable of, creating situations where AI goes
beyond human expectations of what machine
intelligence means to us, making objects oddly,
Artistically Intelligent.
Fig 0. Artistic Intelligence by Ray LC: sculptures imbued with
machine learning for creative expression. Source: Ray LC.
Introduction
Technology is taking over much of our daily
lives. Instead of memorizing epic poems passed
down through generations, as in Homer’s era,
humans invented books to record them. Now
instead of using physical paper as media, we
record information digitally and no longer need
books. We went from talking, singing and
memorizing, to recording, archiving and
searching when we need something. These new
tools have become integrated with human
capabilities and made us more powerful, with
the experience and findings of all previous
generations available at our fingertips. If
previously human capabilities, like way-finding,
calculating and memorizing, can be overtaken
by GPS, computer programs and the internet,
what other fundamentally human abilities will
be overtaken by the tools humans create?
The most unique thing about humans is our
ability to express ourselves by creating. Animals
and plants can transform their environments the
way we do, but they have limited means of
making tools to do their work, and they are even
more limited in the way in which they create
works of imagination. Studies have found cells
in the monkey cortex that react to the use of
tools, [1] but non-human primates are limited in
what they can do in open-ended cognitive tasks,
such as the inability to compose a picture. [2]
Humans, on the other hand, can create entire
worlds in their minds, invent hypothetical
scenarios and stories, and evaluate them, and
think of futures that may not correspond to
reality. We use ideas imaginatively much as we
use tools, talking about the hypothetical future
based on “what if” questions. [3] Can this
fundamentally human ability one day be
transferred to tools we invent? Will we make
Artificial Intelligence (AI) that creates with us,
or even more capably, creates for us? Can we
make an AI for Artistic Intelligence?
Our uniquely human creative potential comes
not from particular domains, like painting or
theatre, for many cultures exhibit creativity
without having venues in which to express them.
Instead, creativity can be defined in terms of the
ability to shape and improve ideas adaptively in
changing environments, [4] a task suitable for
Machine Learning (ML) once the goal state of
adaptation has been established. Tasks with a
simple goal state, like winning a chess game,
have comparatively simple ML solutions,
because algorithms can search for increasingly
effective ways of winning the game. In creative
endeavours, the goal state is less obvious to
humans, so we are unable to create machines
that do the task for us, just by virtue of the
ambiguity of what that task is actually trying to
do. A sculptor may create a sculpture as much
for its likeness to someone in her life (a
well-defined goal) as for a need to expose societal
prejudices (a goal much harder to define
digitally). Hence, creative expression has so far
not been taken over by ML algorithms, because
it’s not clear what the algorithms should aim to
achieve.
One approach is to use ML to achieve what
human artists achieve by learning (copying) the
process of artifact creation. In this scheme, any
future “invention” by machines is coded for by
the creator, and ML is only a tool for
template-based creation. In contrast, another approach is
to make ML agents part of a human ecosystem
of creative works, exploiting our assumptions
about what machines that have humanoid
behaviours can or should do, giving voice to the
machine’s own Artistic Intelligence.
Background
The first approach of using ML to mimic human
creativity started with computer programs used
to make “novel” images. Harold Cohen’s
AARON robot was programmed by its creator
to make abstract drawings based on predefined
styles. Over the years AARON’s output looked
a lot like Cohen’s own evolving style, leading to
the question of what would happen after
Cohen’s death. Would AARON stop learning,
and if so, was it ever really creative, or simply
following patterns? Cohen’s contention is that
art does not require constant creativity, but
rather devising rules to follow and allowing the
pattern of rules to take over. [5] If this is the
case, AARON is only a translator from patterns
to artefacts, with some randomness added.
Fig 1. AARON: a robot used by artist Harold Cohen to make
abstract images autonomously using a routine programmed to
mimic Cohen’s own style. Source: technologyreview.com.
Other examples of ML art based on emulating
human styles and customizations include
ventures in digital image processing, like the
Pikazo app, which combines an image and a
style embodied by a painter in the history of art
or an uploaded texture to make a novel image
combination. The role of ML in the app is to
perform the combination process in a seamless
manner, using image recognition algorithms.
However, there’s no creativity for the AI in this
approach. Images from the Pikazo website show
clear filter-like manipulation of images using the
styles of various artists. Project Magenta
dispenses with the idea of machine creativity and
instead focuses on algorithms that augment what
human creators can do. For example, in Beat
Blender, beat rhythms for music can be
generated by drawing a path through a
spatial-temporal state space of beats, allowing the
musician to make creative content using an
intuitive feel for beats in time and patterns in
space. Project Magenta assumes that ML is used
to heighten what humans can do by creating
novel interfaces and creative combinations of
basic palettes enabled by artists, not by having
the algorithm generate ideas. Similar efforts in
the textual domain have been undertaken to
create machine-generated novels, such as
Allison Parrish’s Our Arrival.
Fig 2. Pikazo: an app that creates new images based on a
preselected style and an image to be modified. Source:
pikazoapp.com
While the majority of ML art projects use ML
to drive creativity, another segment of artists
have focused on what AI will do to the creative
process by focusing on understanding the
machine. In particular, they aim to understand
what it is about machine data mining that
undermines how people, as creatives, can
interact with the world. For example, ML
systems like DeepMask and TensorFlow
enable online systems to categorize people into
stereotypical forms and to use their private data
to make inferences about their lives. [6] With
machine surveillance fast becoming part of our
future, artists like Merijn Bolink are wondering
how best to understand machines in order to
coexist with them. In his “Google’s Eyes”
project, he used Google’s Goggles app to
iteratively identify a sculptural object. First he
created a ceramic tire, which when interpreted
by Goggles, returned a list of items, which
included a jawbone. Then Bolink made a plaster
copy of the jawbone and had Goggles interpret
it, which Goggles identified as a hand. The complete
20-object series is placed together as a
representation of how machines interpret human
art, showing how the human creative potential
may be subverted by machine recognition.
While some artists like Bolink fear the rise of
ML in the creative process, others herald it as
the next phase of our evolution. In an early
treatise on machine creativity, Roger Schank
suggests that creativity can be defined as
innovative problem solving, and that looking for
“near misses” allows machines to home in on
these miss patterns and come up with creative
modifications. [7] In a similar vein, arguments
have been made that human creative power can
be supplemented by machine interfaces, which
have access to a larger scope of data, which can
serve as raw material for powerful creative acts.
[8] A counter argument is that more data is not
necessarily useful, for great artists have often
been constrained in expressing their point of
view, which makes their work particularly
expressive given their limited scope. This can
create powerful emotion in those who have had
a similar experience. Perhaps artistic genius
comes from a combination of ML-like
exploration and human-like constraints, much
like the way trans-humanism puts machines and
humans together.
Fig 3. Google’s Eye project: Each object is iteratively shown to
the Google app Goggles, which gives suggestions related to
what it sees using image recognition. Each suggestion then
becomes the next object. Source: fastcodesign.com
Fig 4. Example of AI in general domains. The machine is
intended to be programmed for one area (sales specialist) but
shocks the audience with human-level knowledge in another
field (influence, data collection). In the artistic domain, AI does
something unexpected based on preconceptions.
All the works discussed so far have applied
ML to enable or enrich creative processes. A
different approach to human-machine creativity
interaction is to realize that our reaction to
machines and what they are supposed to be
capable of in human terms can be used to imbue
them with intelligence and perceived
emotionality and creativity. To allow machines
to go beyond human creative potential, we have
to go beyond just what machines are capable of,
and instead, think about what it is in humans that
makes us think that this is what machines can do.
Creativity is about remaking processes, not
artefacts. What makes this process unique is that
by using what humans believe about machines
to subvert our preconceived notions, we are
making both humans and machines more
creative. We are more creative because we can
make tools that transcend boundaries to allow
them to work closely with us. Machines are
more creative because to the audience, they are
doing more than what stereotypical machines
do.
There’s a natural consequence to the approach
of using ML to transform what we think
machines should do, which is that our fears
about machines posing as humans or knowing
our every move will manifest themselves as
uncertainty as to which part of the machine’s
response is from the machine and which is from
its programmer. This point is akin to going to a
website that offers interactive chats with a “sales
specialist.” After asking her a few questions,
you get the feeling that she is not from your
country and that perhaps she is contracted from
a foreign country, because her replies are
accurate but she uses unusual phrases. As the
order proceeds you realize that she has the
uncanny ability to know exactly what you have
been searching for and knows your online
identity and buying history from the past several
months. Is she a person or a machine? Does it
matter? Predictable AI does not produce a
creative machine. Truly creative AI machines
will possess an aura of mystery, which neither
the programmer nor the machine can explain.
Using ML to subvert what we think about ML
puts us in a world where machines and humans
are equals in their ability to influence: one is
better at data; the other is better at language; one
is better at analysis; the other is better at
emotional response. If the AI machine is
unexpected, it seems creative.
Process
To demonstrate the power of ML for creating
smart objects capable of unexpected interactions
with people, I created a set of sculpture pieces
that incorporate digital technology using ML to
predict and control, and occasionally, to
surprise. Sculpture has the connotation of being
inactive, because it usually remains inside a
museum or in a fixed public space. What’s more,
it is usually considered serious and high-brow
due to its association with classical works of art
and intellectualism. I chose sculpture as the
domain of experimentation because I wanted to
challenge these two stereotypes about sculpture
by creating pieces that interact instead of being
sedentary, and that exhibit quirky and
unexpected behaviour instead of being profound
and unexciting.
To begin, I observed that ML algorithms start
from the premise of using observable states
coupled with desired outcomes to predict future
observations, using a learning algorithm to
update the network to make the predictions more
accurate. [9] I asked: if ML agents are really
making predictions based on observations, how
would a humanoid version that behaves
similarly be interpreted by humans? I made a
hand sculpture that rotates either left or right
using an embedded servo motor. The gesture is
meant to convey the act of “looking” by the
sculpture and prompts the audience to respond
with the same gesture. When a person moves
close to the hand sculpture, the sculpture uses an
ultrasonic sensor to detect the person’s presence,
and turns to face right or left randomly.
However, the distances from the sensor to the
left and right sides differ, so the sculpture can
use ML to determine whether the person
is on its left or right and train itself to adapt to
the sequence of human hand movements. Using
this data, the sculpture learns to predict whether
the next hand motion from the human will be to
the left or right of it, and will move there in
anticipation. The predictions become more and
more accurate over time as data is accumulated
to drive the ML. The algorithm takes the average
of recently detected locations and forms a
maximum likelihood estimate of where the hand
will be next, which is a form of time series
prediction (see https://recfreq.wordpress.com/
portfolio/ai-artistic-intelligence/).
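The prediction scheme described above can be sketched as follows (a hypothetical reconstruction; the installation's actual code and the `HandPredictor` name are our assumptions). A running average of recent left/right observations serves as a maximum-likelihood-style estimate of the next position:

```python
from collections import deque

class HandPredictor:
    """Predict the side (-1 = left, +1 = right) of the next hand
    appearance from a running average of recent observations."""

    def __init__(self, window=5):
        # Keep only the most recent readings (a simple moving window).
        self.history = deque(maxlen=window)

    def observe(self, side):
        """Record one filtered sensor reading: -1 (left) or +1 (right)."""
        self.history.append(side)

    def predict(self):
        """Turn toward the side the recent average favours; default to
        the right when there is no data or an exact tie."""
        if not self.history:
            return 1
        avg = sum(self.history) / len(self.history)
        return -1 if avg < 0 else 1

p = HandPredictor(window=3)
for side in (-1, -1, 1, -1):
    p.observe(side)
# The window now holds (-1, 1, -1), so the predictor turns left.
```

In practice the raw ultrasonic readings would first be thresholded into the two sides and median-filtered, matching the data-filtering step mentioned below.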
Fig 5. Hand sculpture that predicts where the interaction with it
will stem from. The servo for rotation is controlled using a
microcontroller that detects user distance using an ultrasonic
sensor. The learning algorithm predicts future user positions by
keeping track of the averaged time series of previous responses.
Source: Ray LC.
In user tests, I found that it was difficult to
have people continue to interact with the
sculpture to see the effect of the training. The
ultrasonic distance sensor is also occasionally
finicky, making data filtering necessary to
maintain the accuracy of the sensor data for
prediction. Moreover, the sculpture's direction
can be randomly correct early on: with only two
possible states, chance accuracy is already 50%
even without learning. This can mask
the progress that the sculpture makes over time.
However, if observers are committed to
watching the development over time, they can
see the learning undertaken by the ML agent.
Audiences also find the statue engaging,
because plaster hands don’t usually move. The
statues with interactive components were
considered “cute” by some observers. Many
were also surprised by its ability to move, and
those who had the patience to observe its
learning found the adaptability of the statue to
be evocative. The canonical view of an
immobile sculpture was replaced by an
interactive element, which I will continue to
explore in other modalities. Thus, I have shown
that a motorized sculptural piece capable of
learning about its audience can use ML to enrich
its interaction and evoke positive unexpected
responses, contrary to its stuffy classical
stereotype.
Fig 6. A “Star Trek” signalling hand sculpture, fortified by a
raspberry pi running the Google Speech API. It interprets user
voice input and replies, distorting the original meaning. The
user is prompted to press the red button and say anything with
the word “sculpture” in it. The sculpture adapts the words
accordingly. Source: Ray LC.
Next, I wanted to take the unsuspected
sculptural agency idea one step further by
making a talking sculpture that appeared to have
some capabilities of creative speech production.
I used the ML in the Google Cloud Speech API
executed on a raspberry pi as a starting point to
create my own style of machine speech
interface. The audience is prompted to press a
button and say something involving or about
“sculpture.” A computerized voice reply comes
back from the sculpture, which is a plaster
mould of a hand doing the Star Trek Vulcan
“peace and prosper sign.” The Star Trek
reference here is intentional, for it evokes future
technology and thought in a traditional
sculptural form. The peace and prosperity
metaphor also subtly prompts the audience to
talk to the sculpture as if it is a character in a
movie with agency, and evokes the sensibilities
of smart devices that serve human needs and
work cooperatively without conflict, much as
the Vulcans in Star Trek operate. Using speech
recognition and custom routines based on the
Google Speech API, which uses ML to
recognize words, I trained the statue to answer
not merely by repeating what the user says, but
by saying it as if it has agency (see video).
Fig 7. A head sculpture that uses computer vision to see where
the user is, and replies using digital code embodied as an LED
matrix that sweeps across the mouth of the sculpture,
representing machine communication.
For example, whenever the user says
“sculpture,” the sculpture replies with a different
noun, which first appears to be referencing the
user. But as the interaction proceeds, it also
changes the pronouns and verbs, and the
user notices that the sculpture is using the
previous noun to refer to itself, not the user. The
statue is seen to have made a creative
transformation in the user’s view, not by the way
it has changed its interaction style, but in the
way in which the audience discovers what is
algorithmically already there. In user tests, the
only instruction I gave was to tell the users to
say anything they wanted referencing
“sculpture,” but what occurred is that the users
learned more and more about the rules of
engagement undertaken by the statue. One user
said that she thought the statue was subservient
and complimentary at first, but then over the
course of the interaction, it became “sassier.”
What changed was not the rules, but the
potential for the ML agent to surprise (and
annoy) users. The form of the hand gesture as a
Star Trek symbol was key as well, for users say
that they expected the statue to be “high-minded
and calm,” but they actually had a contentious
exchange, in which both user and statue claimed
to be the superior agent. Interestingly, it’s not
the ML part (voice recognition and
understanding) that made the surprising results
possible, but rather the human intervention that
involved swapping the text. Thus, I created a
speech-producing statue capable of surprising
and evoking an emotional reaction from users.
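The swap-and-escalate rules described above can be sketched as a small rule-based agent. This is a hypothetical illustration, not the code of the piece: the class name, noun pool, and pronoun rule below are my own stand-ins for the custom routines built on top of the Google Speech API's recognition output.

```python
import itertools
import re

# Hypothetical substitution pool; the piece used its own word lists.
NOUNS = ["statue", "machine", "artist", "oracle"]

class SculptureAgent:
    """Rule-based reply generator: each mention of 'sculpture' is swapped
    for a different noun, and in later turns the agent reuses its previous
    noun to refer to itself, claiming agency."""

    def __init__(self):
        self._nouns = itertools.cycle(NOUNS)
        self._last_noun = None
        self.turn = 0

    def reply(self, utterance: str) -> str:
        self.turn += 1
        noun = next(self._nouns)
        # Swap the keyword for this turn's noun.
        text = re.sub(r"sculpture", noun, utterance, flags=re.IGNORECASE)
        # Flip second person to first person so the statue claims agency.
        text = re.sub(r"\byou\b", "I", text)
        if self._last_noun is not None and self.turn > 2:
            # Later turns get "sassier": the previous noun now names the statue.
            text += f" And I am the {self._last_noun}, not you."
        self._last_noun = noun
        return text
```

Because the rules never change, a user who keeps talking discovers the self-reference that was algorithmically there all along, which is the effect described above.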
As a final exercise, I wanted to extend the idea
of creative production further than simply
unexpected interactions. I decided to focus on
visual representations after having previously
explored the physical and language arenas.
Although inspired by the ML algorithms for
image association used by Google and Pikazo, I
wanted to situate the piece so that the sculpture
is the agent behind the “deep dreaming”
undertaken by ML agents. Unlike previous
efforts, I wanted to create a physical interface
that appears to be producing the creative output,
so that it’s not a computer using user input to
create modified dreams, but the sculpture itself
which makes content based on who and where
the user is. To evoke the perception of creativity,
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Part I. Full Papers (peer-reviewed)
I gave the machine a human face. Humans are
distinguished by their ability to manipulate and
communicate using language and by their ability
to creatively express themselves. I put both of
these agencies in a traditionally inanimate
sculpture by putting an LED matrix behind the
silicone-based sculpture. The wood-grain-embedded silicone retains the form of a classical
statue, but forms a mesh that has hidden within
it the ability to express itself. The LED matrix
appears to respond to human touch due to its
proximity to the silicone layer. Using Arduino to
control the matrix, I created custom animations
that evoked visual creation from the mouth of
the statue when the user’s face was detected by
an attached camera. The animations depend on
where the human face is. I wanted to make a
connection between human speech and machine
data processing. Whereas we can express our
creativity by making speeches, writing novels, or
creating worlds using language, for example, the
machine analogue is not human language as we
know it, but a machine code that we can only
visualize across a layer that blurs
communication. Just as we as 3D beings cannot
contemplate life in 4D, we don’t understand
machine creative processing and the ways it can
express itself as a form different from human
conception. As humans, we can only hope to
visualize the data machines produce across a
layer of uncertainty. Again, it’s not the ML
aspects (computer vision, image recognition,
etc.) that made the sculpture surprising, but the
human intervention that appeared to reveal
machine “thought” using the LED matrix.
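The mapping from face position to matrix animation can be sketched as follows. The dimensions and function names are hypothetical (the installation used an Arduino driving the matrix, with camera-based face detection, such as an OpenCV cascade, supplying the bounding box):

```python
MATRIX_W, MATRIX_H = 8, 8  # hypothetical 8x8 LED matrix

def face_to_column(face_x: int, face_w: int, frame_w: int) -> int:
    """Map the horizontal centre of a detected face bounding box
    (e.g. from a Haar cascade) to a matrix column index."""
    center = face_x + face_w / 2.0
    col = int(center / frame_w * MATRIX_W)
    return max(0, min(MATRIX_W - 1, col))

def render_frame(col: int):
    """Build an 8x8 boolean frame with a lit vertical bar at `col`,
    the kind of pattern a microcontroller sketch can sweep across
    the mouth of the sculpture as the face moves."""
    return [[c == col for c in range(MATRIX_W)] for _ in range(MATRIX_H)]
```

Each camera frame yields one matrix frame, so the light appears to track the viewer's face.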
Fig 8. The head sculpture lights up when a face is detected, but also moves its pixels based on where the face is in space. In this example, the face of the person whose face was cast for the sculpture is detected by the statue.
Users found the silicone face and LED matrix
frightening at first. The red matrix evokes a type
of bloodiness associated with the mouth. They
found the pattern of the matrix display
mesmerizing, because it tends to change form
when they put their fingers on different parts of
the silicone. The computer vision interaction
provides users with a feeling of agency, because
the light comes on only when they are close to
the sculpture, and appears to track their face, a
type of digital productivity. Unlike traditional
sculptures, my piece evokes creative potential
that contrasts with the classical form. One user
said that it reminded him of the way machines
would speak to each other if they were to
communicate, because it “doesn’t say the same
thing twice.” The silicone layer masks the lit up
digital LEDs, so the effect is a filtered view of
what machines would do creatively if they were
creative. In summary, I created a digital machine
metaphor for human creativity, which can be
experienced through a filter established by
classical forms.
Directions
The tools we create are taking over our lives.
From recording our memories onto physical
pages to analyzing the consequences of business
investments; from enabling communication
over long distance to interpreting our speech and
predicting our desires, digital machines enabled by ML are moving from helping us, to enabling us, to thinking for us. Will the most distinctive characteristic of humans, that of creative expression, be the next bastion to fall?
Experiments with machine creativity have
centred on using ML to help or imitate the
human creative process. This strategy, however,
is based on an anthropomorphic view that the
way humans express themselves is the basis for
all types of creative works, including those of
machines, much as the Turing Test inherently
situates machines within the human space with
no regard for how non-human processes work.
[10] I proposed that machine artistic expression
can emerge instead from exploiting what
humans think of objects and devices, allowing
ML to subvert traditional forms, coalescing into
Artistic Intelligence. Ray LC (Luo)
a system of creative expression beyond simply
generating data from modifying previous
models. In this view, the context and situation of
the use of ML is just as important as algorithms,
enabling a world permeated by creative
machines. Indeed, we may be making machine
creative expression possible not by simply
coding it into their algorithms, but rather by
changing the way we think about machines and
how they operate. In short, the more we know
about our tools, the more we learn about
ourselves and our own Artistic Intelligence.
References
1. P. F. Ferrari, S. Rozzi, and L. Fogassi, “Mirror neurons responding to observation of actions made with tools in monkey ventral premotor cortex,” Journal of Cognitive Neuroscience 17, no. 2 (2006).
2. M. Vancatova, “Creativity and innovative behavior in primates on the example of picture-making activity of apes,” NFU Psychology 2, no. 2 (2008).
3. Anthony Dunne and Fiona Raby, Speculative Everything (Boston: MIT Press, 2013).
4. C. D. Hondzel and R. Hansen, “Associating creativity, context, and experiential learning,” Journal of Education Inquiry 6, no. 2 (2015).
5. Martin Gayford, “Robot art raises questions about human creativity,” MIT Technology Review (2016), https://www.technologyreview.com/s/600762/robot-art-raises-questions-about-human-creativity/.
6. Trevor Paglen, “Invisible images (your pictures are looking at you),” The New Inquiry, December 2016, https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/.
7. Ray Kurzweil, The Age of Intelligent Machines (Boston: MIT Press, 1990).
8. Clive Thompson, Smarter Than You Think: How Technology is Changing Our Minds for the Better (London: Penguin Press, 2013).
9. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323 (1986): 533-536.
10. Benjamin Bratton, “Outing AI: Beyond the Turing Test,” New York Times Opinionator, February 2015, https://opinionator.blogs.nytimes.com/2015/02/23/outing-a-i-beyond-the-turing-test/.
CG-Art: Demystifying the Anthropocentric Bias of Artistic Creativity
Leonardo Arriagada
University of Chile
leoarriagada@outlook.com
Abstract
This aesthetic discussion examines in a
philosophical-scientific way the relationship
between computation and artistic creativity.
Currently, there is criticism of the idea that an
algorithm can be artistically creative. Only a few proponents hold that computer-generated art (CG-Art) meets the definition of
creativity proposed by Margaret Boden (2011):
“the ability to come up with ideas or artifacts
that are new, surprising, and valuable.”
Moreover, it has been pointed out that CG-Art is
not fundamentally art, because art is considered
a unique and exclusive human manifestation of
our species. I propose that the denial of CG-Art
as art has an anthropocentric bias. To
demonstrate this, I use recent studies in
cognitive science on artistic creativity to show
that behind the denial of creative artistic capacity to machines lies a mysticism that denies current scientific advances.
1. Introduction
Artificial intelligence (AI) has developed
exponentially since the beginning of the 21st
century. Every day we are surprised by
algorithms that allow machines to perform tasks
previously considered impossible. We receive
shopping recommendations on Amazon, and
reminders of our agenda thanks to Google
Assistant. Car company Tesla has invested
millions in autonomous driving. Such examples
continue to spread. In short, it seems that every
time we exclude something from the domain of
AI, researchers take it as a challenge to
overcome. However, all the tasks mentioned
above are perceived as mechanical, so they can
be modeled mathematically to be executed by a
computer. Our common sense can project the development of AI into the distant future and
imagine that it will be possible to execute any
mechanical task by an application or computer
program. But can a machine create art? Is artistic
creation a mathematically modelable task?
2. Is Creativity a Limit or Goal for CG-Art?
The scenario just described has led
philosophers, cognitive researchers and
programmers to wonder if a machine has the
potential to create. The subject continues to be
discussed, since it requires a certain level of
mathematical modeling of what we understand
by creativity. In this sense, Margaret Boden
(2011, 29) proposed the following definition of
creativity: "the ability to come up with ideas or
artifacts that are new, surprising, and valuable".
I don’t think that, in general, anyone would
completely object to this definition. Thus,
creativity in general must include novelty,
surprise and value. Regarding the particularity
of artistic creativity, I believe that it is precisely
the "value" aspect of the definition that is the
most controversial when analyzing creative
algorithms. I will return to this later, and I will
try to show that denying value to artistic
creations overlooks two important facts: the
mechanical evaluation of artistic work and
robotic embodiment (both topics are addressed
in section 4).
Creative algorithms have undergone fruitful
development thanks to models based on
Artificial Neural Networks (ANN), among them Generative Adversarial Networks (GAN). Deep neural networks also allowed the computer program AlphaGo to defeat Lee Sedol, considered one of the best human Go players in the world. This competition was used to show that learning algorithms can effectively create moves that seem irrational to humans. Therefore, it is no
longer absurd to argue that algorithms can at
least create Go plays that are novel and
surprising. However, is this homologous to
artistic creation? What about the aesthetic
assessment of the creations of machines?
The fundamental point here is that Boden
(2011) defined a special type of art by joining
the concepts "creativity" and "computing".
Thus, CG-Art is understood as art in which "the artwork results from some computer program being left to run by itself, with minimal or zero interference from a human being" (Boden 2011).
There are numerous examples of algorithms that
have been trained to produce aesthetically
pleasing output for human evaluation. AARON,
by Harold Cohen, and EMI, by David Cope, are
classic illustrations of this type of art. In both
cases the programmers were dedicated only to
improving the algorithms, leaving the creation
up to the software itself. But the general opinion
is that the creations of AARON and EMI are the
authorship of Cohen and Cope, not of the
software itself. I discuss this point in section 6.
But first it is necessary to understand what is
meant by art and why the lay public generally
consider it distinct from mathematical
modeling.
3. Is Mystical Inspiration the Only
Explanation for Artistic Creativity?
We have seen that a learning algorithm can create a novel and surprising play. But a third characteristic is
still missing to satisfy Boden's definition of
creativity: "value". I do not analyze here the
value of AlphaGo's creations. My subject of
investigation is algorithms for creating art, so
the value I refer to here is the aesthetic type. I
think that if it is already controversial to say that
AlphaGo creates plays, it is problematic to
affirm that machines can deliver output of
aesthetic value. In this regard, Aaron Hertzmann
(2018) points out:
"The concepts of art and inspiration are often spoken of in mystical terms, something special and primal beyond the realm of science and technology; it is as if only humans create art because only humans have 'souls.' Surely, there should be a more scientific explanation."
Hertzmann, despite rejecting the idea that a
computer program can create art, forces us to
question our concept of art. In fact, I agree that
most artists are reluctant to believe CG-Art is
art. But the arguments they use ultimately appeal
to the mystical qualities of "talent" and
"inspiration". It is not my goal to refute the
mystical vision of art that many artists share. I
understand that it is a matter of faith and
therefore, impossible to refute. Of course, there
cannot be a mathematical modeling of this
concept of art either.
Considering the above, I will dedicate myself
to investigating aesthetic aspects of algorithms
or machine creations. Even so, I point out that a
very simple objection to the lack of a mystical
connection in CG-Art is that it is different from
human art. Thus, although human artists may
choose to believe in mysticism, potential
computer artists do not have to submit to this
requirement. I consider it much more productive
to study the aesthetic value of CG-Art without
appealing to concepts such as "talent".
4. The Aesthetic Value of CG-Art. Two
Approaches: Human and Mechanical
Evaluation
Next I will examine how CG-Art can effectively
produce novel and surprising works. In effect,
this can be achieved by random combinations.
This is not a topic that I will delve into in this
text. But I postulate that the most debatable
characteristic of CG-Art is its aesthetic value. I
will show that CG-Art does meet this
requirement through two approaches, one
focused on human evaluation and the other on
machine evaluation.
4.1 Human Evaluation of the Aesthetic Value
of CG-Art
Recently, people’s perceptions of CG-Art were
evaluated. In the article "Putting the Art in Artificial: Aesthetic responses to computer-generated art" (Chamberlain et al. 2017), the
researchers studied how human observers
respond to artworks generated by computers and
by humans. The findings indicate a negative bias
towards CG-Art. Predictably, CG-Art works that expressed representational features were judged more artificial than abstract works. In the same way,
the observers valued imitations of brush strokes
and small imperfections in CG-Art works more
highly.
Chamberlain et al. (2017) verified that this
negative prejudice towards the aesthetic value of
the CG-Art diminishes when the observer can
see the production of the work. This led them to
suggest that increasing the anthropomorphic
characteristics of a robot could tend to eliminate
hostility towards CG-Art. Indeed, it seems that a
human observer expects to see artists working
on their artwork. The "black box" model, in
which only the printed output of an algorithm
can be seen, moves away from the current
human vision of artistic creation. The simple
fact of seeing a robotic arm painting on a canvas
increases the observer's empathy. Chamberlain
et al. (2017) postulate that this may be the result
of the activation of mirror neurons in the human
brain.
If the "black box" model in which CG-Art
works is a handicap for its aesthetic value, it is
interesting to consider what would happen if we could overcome it. Unfortunately, we still do not have
the technology to create a robot of the
complexity that the Westworld series (2016)
invites us to imagine. However, we can
overcome this handicap by presenting human
works and CG-Art without telling the observer
which one is which. It is precisely this aspect
that is investigated in the article "CAN: Creative
Adversarial Networks Generating "Art" by
Learning about Styles and Deviating from Style
Norms" (Elgammal et al. 2017). Through a
GAN modification, the researchers developed
Creative Adversarial Networks (CAN).
Basically, the algorithm was optimized so that it did not merely emulate human art styles, but was genuinely creative. This point is
developed in section 4.2.
The findings in this study showed that humans
assign a higher score to the CG-Art created by
CAN, surpassing a sample of Abstract Expressionist works shown at the premier art fair Art Basel in 2016. The participants were asked to assign a
score of 1 to 5 for the qualitative indicators of
intentionality, visual structure, communication
and inspiration. In each of the items the CG-Art
was given a higher score than human art.
In conclusion, I postulate that in a blind test
CG-Art has aesthetic value for humans.
However, artworks by human artists receive
constant aesthetic appreciation. So far we have
discussed only the external valuation of
observers. Can an algorithm aesthetically
evaluate its own art?
4.2 Mechanical Evaluation of the Aesthetic
Value of CG-Art
When Harold Cohen wrote the computer
program AARON, which he designed to
produce art autonomously, he filtered the output
that seemed aesthetically valuable to him. Since
then, algorithms based on artificial neural
networks (ANN) have progressed considerably.
As I mentioned earlier, the GAN subtype is the
most widely used today. I explain below that in
a GAN (and of course in a CAN), an aesthetic evaluation is performed by the algorithm itself.
First, we need to understand how a GAN works.
According to Elgammal et al. (2017, 5), "a
Generative Adversarial Network (GAN) has two
sub networks, a generator and a discriminator.
The discriminator has access to a set of images
(training images). The discriminator tries to
discriminate between "real" images (from the
training set) and "fake" images generated by the
generator. The generator tries to generate
images similar to the training set without seeing
these images. The generator starts by generating
random images and receives a signal from the
discriminator if the discriminator finds them real
or fake."
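The adversarial signal described in this quotation can be illustrated with a deliberately tiny sketch. This is not Elgammal et al.'s model: it uses one-dimensional "images", a linear generator, and a logistic discriminator with hand-derived gradients. The generator learns to match the training distribution, which is exactly why a plain GAN emulates rather than creates:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" images are stand-ins: scalars drawn from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    fake, real = a * z + b, real_batch(batch)
    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    sr, sf = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - sr) * real - sf * fake)
    c += lr * np.mean((1 - sr) - sf)
    # Generator ascent on log D(fake): try to fool the discriminator.
    sf = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - sf) * w * z)
    b += lr * np.mean((1 - sf) * w)

# After training, generated samples cluster near the real mean of 4.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
```

The discriminator's real/fake signal is the built-in aesthetic judge; the CAN modification adds a second, style-ambiguity signal on top of this loop.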
This dual model has aesthetic assessment
incorporated into it. In effect, when the
"discriminator" is deceived by the "generator",
the GAN reaches an aesthetic value similar to
that of the original set of training images that
was provided to it. It follows from this that a
GAN does not create art, but rather emulates
artistic styles. Fortunately, the CAN subtype has
been created specifically to move away from
imitation and achieve authentic creation by an
algorithm. As explained by Elgammal et al.
(2017, 13), in this modification of the GAN, the
discriminator gives the generator two signals: a)
the classification of art or non-art, and b)
correspondence to a specific artistic style. In this
way, "the proposed CAN model generates
images that can be characterized as novel and
not emulating the art distribution, however
aesthetically appealing."
Therefore, the statement by Hertzmann (2018,
19) that "unlike human artists, these systems do
not grow or evolve over time" does not seem
justifiable. A CAN is capable of creating and
evaluating on its own. I suggest that these
capabilities allow us to affirm that a CAN does
grow and evolve aesthetically.
5. CG-Art and Society
The final criticism that Hertzmann (2018) makes of CG-Art is that art is social, and since
computers are not "social agents", they cannot
create art. I will discuss this briefly with two
answers that I think are pertinent to analyze and
develop in the future.
First, Hertzmann seems to forget that he is talking about CG-Art. Although it seems indisputable that art created by humans is social, in the terms stated in Hertzmann (2018), this does not mean that CG-Art must be social. We are analyzing an algorithm. The way in which
algorithms relate is not a subject widely studied
even in sociology. In my opinion, we cannot
conclude that a computer cannot create art
because it is not a social entity. Obviously, an
algorithm doesn’t have the same kind of
experiences a human has. The point here is to see whether it can create art, not whether it can create human art. The latter is not possible at present.
Perhaps in the future, with an algorithm
implanted in an anthropomorphized body, social
interactions can be achieved that will allow this
condition to be fulfilled.
But there is no aspiration for CG-Art to be considered human. Its ways of knowing and experiencing are different. CG-Art is
fundamentally based on Big Data, which is
actually the most social thing we have, since it
shows patterns of social behavior. Therefore, it
is not surprising that in blind tests CG-Art is
valued aesthetically, since it is based on a small
sample of Big Data. I postulate that if we are
optimistic and wait for the Big Data used by the
CG-Art to be extended, we will have aesthetic
works never thought of by humans.
6. Collaboration. Authorship. Apprentice
and Teacher. Codes and Laws.
"To date, there is a rich body of computer-generated art, and in all cases, the work is credited to the human artist(s) behind the tools, such as the authors or users of the software – and this might never change." (Hertzmann 2018, 2)
A final objection to CG-Art is related to the
authorship of the artworks. Many postulate that
the real authors of the artworks of a machine are
the programmers of their code. In my opinion,
this is incorrect for two reasons.
First, and as Hertzmann himself (2018)
recognizes, human artistic work is social.
Therefore, it involves many agents. It is not
necessary for only one of them to be considered
an artist, since the agents may fulfill different
functions. Let me give an example to clarify this
point. When a film is shot we have at least a
director and actors collaborating artistically.
Both functions hybridize and complement each
other. We cannot say, for example, that the
artwork "film" is the creation only of the director
and not the actors. In effect, we say that the
director fulfills the artistic function of
"directing" and the actors of "acting". In both
cases, art has been created and a film, which is
an artwork in itself, has been created.
CG-Art invites us to think about a new, more contemporary art form, one grounded in collective creation. It can be argued that the
programmer and the algorithm are the artists.
This is so, I postulate, because unlike working
with a brush, the algorithm acts as a kind of
creative agent, or a colleague.
Second, I propose an analogy in which a
creative algorithm is to its programmer as
human apprentice is to a human master. If we
assume that human art is social, then we can
understand that there is no artist who has not had
a teacher. This role of teacher can be exercised
by an expert, who explicitly teaches, or by
experiences lived by an artist, without being
attributed to any human being in particular. In
both cases, the artwork was nurtured by prior
learning.
CG-Art is also based on learning. Its teacher
may be its programmer, another algorithm, a
sample of artworks, and so forth. Learning from
an agent does not prohibit CG-Art from creating
its own art. In the same way, human apprentices
do not have to grant authorship of their work to
their teachers.
Finally, there are criticisms of CG-Art based
on the fact that an algorithm is a code and
therefore cannot create because it follows rigid
rules. But everything that exists follows
inviolable rules. For example, neither CG-Art
nor a human artist can violate physical laws.
That is a real limitation for both types of art.
Also, we all have a code that we follow. The
computations performed by a CAN are complex instructions written at first by programmers, but then rewritten by the algorithm itself throughout its learning. In the case of humans, we all develop
genetically according to our DNA, which is a
code we are born with. No artist would feel
limited by having to respect physical laws and
being forced to develop according to their
genetic code. These criticisms seem worth
investigating and developing to clarify these
points with the lay public.
7. Conclusion
This article examined the relationship between computation and artistic creativity philosophically and scientifically. It argues that CG-Art is a new art form and that most criticisms of it are made from an anthropocentric viewpoint. CG-Art is not human art and is not intended to be. The works of CG-Art satisfy the criteria of novelty, surprise and aesthetic value. Moreover, in blind tests, human observers consider CG-Art to be more creative than art created by humans. I therefore consider that greater analysis of CG-Art will allow us to broaden our aesthetic conception of what art is, as long as it is studied without prejudice.
Bibliography
Boden, Margaret. Creativity and Art: Three Roads to Surprise. New York: Oxford University Press, 2011.
Chamberlain, Rebecca, Caitlin R. Mullin, and Johan Wagemans. “Putting the Art in Artificial: Aesthetic responses to computer-generated art.” Psychology of Aesthetics, Creativity, and the Arts, 2017.
Elgammal, Ahmed, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone. “CAN: Creative Adversarial Networks Generating ‘Art’ by Learning About Styles and Deviating from Style Norms.” arXiv:1706.07068v1, 2017.
Hertzmann, Aaron. “Can Computers Create Art?” Arts, 2018.
Unrolling the Learning Curve: Aesthetics of Adaptive Behaviors with
Deep Recurrent Nets for Text Generation
Sofian Audry
School of Computing and Information Science
University of Maine, Orono, ME, USA
sofian.audry@maine.edu
Abstract
Machine learning has traditionally focused on
problem-solving and optimization. But
contemporary conceptions of art usually describe art as non-purposeful and non-optimizable. In this paper, I propose an
alternative approach to using machine learning
for artistic creation by using the training phase
itself as a generative process of new aesthetic
forms. Contextualizing my approach within
media art history and the history of artificial
intelligence, I describe a series of experiments
performed using this approach using Long
Short-Term Memory (LSTM) recurrent neural
networks applied to text generation.
Introduction
Machine learning has recently become a popular
approach for studying artistic creativity and
creating new forms of art. Oftentimes, this
requires framing the creative process as a
problem to be solved using some form of
optimization. For example, such approaches
have been used to evolve new 3D creatures
based on subjective preferences; [1,2] to
generate music scores that “sound like” the
dataset they have been trained on; [3,4] to
transfer a painter's style onto another
painting; [5] and even to generate images that
often feel “more artistic” (at least to the layman)
than those of contemporary painters. [6]
Indeed, machine learning is designed to
recognize regular patterns, and when employed
for generative purposes, is attuned to
reproducing things that already exist. Artists, in contrast, seek to create the unexpected.
Optimization is inherently at odds with artistic practice. Studies that try to tackle artistic
production as an optimization problem are
immediately faced with problems such as the
existence of multiple maxima (e.g., there is no
such thing as “the best movie” or “the best
painting”); the possibly infinite and
incommensurable domains in which artworks
exist; and the fact that art is often precisely
described as non-purposeful and non-optimizable. [7,8]
In this paper, I explore an approach to
computational art that uses the optimization
process of machine learning algorithms as a raw
material. This technique unrolls the iterative
steps in the training phase, thus revealing the
temporal structure of the learning agent's
behavior. I examine one particular set of
experiments that was conducted using this
technique, involving a deep learning model
known as a long short-term memory (LSTM)
recurrent neural network, trained on a text
database. The creative artistic and technical
approach is presented, as well as the outcomes.
Finally, I discuss the implications of the work in
the field of computational media art.
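The idea of using the training phase itself as material can be made concrete with a toy sketch. This is not the LSTM setup used in the experiments: a character bigram counter stands in for the network, and each snapshot of its output during incremental training is one point on the unrolled learning curve:

```python
import random
from collections import defaultdict

# Toy corpus; the actual experiments trained an LSTM on a text database.
CORPUS = "the sea rises and the sea falls and the sea rises"

def sample(counts, length=20, seed=0):
    """Greedily generate text from the current bigram counts; early in
    training the output is near-random, later it echoes the corpus."""
    rng = random.Random(seed)
    ch, out = "t", ["t"]
    for _ in range(length - 1):
        nxt = counts.get(ch)
        ch = max(nxt, key=nxt.get) if nxt else rng.choice("aeilrst ")
        out.append(ch)
    return "".join(out)

def unrolled_training(steps=5):
    """Feed the corpus to the model in increments and snapshot its output
    after each step: the sequence of snapshots, not the final model, is
    the material of interest."""
    counts = defaultdict(dict)
    snapshots = []
    chunk = max(1, len(CORPUS) // steps)
    seen = 0
    for step in range(steps):
        end = min(len(CORPUS) - 1, (step + 1) * chunk)
        for i in range(seen, end):
            a, b = CORPUS[i], CORPUS[i + 1]
            counts[a][b] = counts[a].get(b, 0) + 1
        seen = end
        snapshots.append(sample(counts))
    return snapshots

snapshots = unrolled_training()
```

Instead of keeping only the optimized endpoint, the work presents the whole trajectory of intermediate behaviors.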
Context
Machine learning finds its origin in cybernetics,
a disruptive science that impacted not only
computer science and artificial intelligence, but also biology, neurology, sociology, anthropology, and economics. Furthermore, it
had a profound impact on art in the 1960s, and foreshadowed the later development of new media art. (This research was initiated and conducted as part of my postdoctoral studies at Comparative Media Studies/Writing, Massachusetts Institute of Technology, Cambridge, MA, USA.)
One of the central concepts of cybernetics was
that of systems or agents, some of which, using
feedback from their environment, were able to
adapt over time by trial and error. [9] This very
basic concept of an agent iteratively and
incrementally adapting to its environment by
adjusting its own structure is at the core of deep
learning, which is based on layers of densely
interconnected agents, called neurons, which
work together to achieve a greater, more
complex level of agency at the global scope. In
current deep learning applications, these
millions of agents are force-fed gigabytes of
data, resulting after several iterations in the foie
gras of the deep learning revolution: fully
optimized models often performing above
human level.
Since the 1950s, many artists have exploited
the adaptive features of cybernetics systems and
other learning agents, not by applying optimized
models, but by exploding the learning process
itself, often running it in real time. Consider, for
example, Hungarian artist Nicolas Schöffer’s
piece CYSP I, which was directly inspired by
Norbert Wiener’s theory of control and
communication. [10, p. 472] Or Karl
Sims’ Galápagos (1997), in which visitors are
asked to select their favorite artificial 3D
creatures in a virtual environment, and where the
selected creatures’ genetic code is then used to
create the next generation using genetic
algorithms. Performative Ecologies (2008-2010), by architect Ruairi Glynn, is another
example. Inspired by the work of Gordon Pask,
especially his 1968 installation Colloquy of
Mobiles, Glynn’s installation creates a
conversational space in which dancing robots
evolve in constant interaction with one another
and with the public.
Most of my own work over the past decade
has focused on the design of computational
artificial agents, and documenting the
performance behavior of these agents in the real
world. For example, in my series of site-specific
interventions Absences (2008-2011), I created
small, autonomous, ephemeral agents that acted
within natural environments, such as forests and
mountains.
My robotics installation
Vessels (2010-2015), created in collaboration
with Samuel St-Aubin and Stephen Kelly,
involves a group of autonomous, water-dwelling
robots that react collectively to their
environment through an emerging group
behavior. Through this earlier research I
developed an interest in how self-organizing and
adaptive processes impact both artistic practice
and the viewer’s experience. Hence, in Vessels,
a genetic algorithm procedure is used to allow
robots to collectively converge to a common
group behavior. A similar mechanism has been
explored by Stephen Kelly in his work Open
Ended Ensembles (2016), in which two agents
use genetic programming (GP) to move along a
fluorescent tube.
Artist and media theorist Simon Penny calls
these kinds of works “embodied cultural agents”
or “agents as artworks” and integrates them
within the larger framework of an “aesthetic of
behavior”, a “new aesthetic field opened up by
the possibility of cultural interaction with
machine systems”. [11] These works are distinct
from so-called generative art, which uses
computer algorithms to produce stabilized
morphologies, such as images and sound: their
aesthetics are about the performance of a
program as it unfolds in real-time in the world
through a situated artificial body.
In my past work, I developed an ontological
framework of behaviors by looking at the
distinctive way behavior morphologies unfold
over time. [12] While existing taxonomies of
cybernetics systems have focused mainly on
their relational and structural aspects, I look at
the temporal dimension of agent behaviors and
its aesthetic potential. [13,9] In particular, I
hypothesize that adaptive behaviors are
distinguished from non-adaptive behaviors by
their ability to change over time and therefore
belong to a “second order” of behaviors – those
whose behavior evolves over time. With that in
mind, we can start considering how the shape of
a behavior emerges from randomness
(morphogenesis), transforms over time
(metamorphosis),
or
remains
stable
(morphostasis).
Using this framework, we can establish that
most learning algorithms go through a phase of
morphogenesis, during which their behavior
changes, until they eventually stabilize in a final
stage of morphostasis. I posit that this process of
transformation and stabilization is artistically
relevant and can be harnessed as a creative
method.

Unrolling the Learning Curve: Aesthetics of Adaptive Behaviors with Deep Recurrent Nets for Text Generation. Sofian Audry
Fig. 1: Schematization of the temporal evolution of an adaptive
behavior. Distance along the vertical axis represents difference
in the form of observable events produced by the agent. The
graphic shows how second-order, adaptive behaviors iteratively
change over time through a process of morphogenesis, until they
stabilize into an optimal first-order behavior, thus entering the
phase of morphostasis.
Approach
In this research, machine learning is used to
generate new forms of behavior. Following
cybernetician Gordon Pask, we define a
behavior as a stable form of events caused by an
agent, as perceived by an external observer. [14,
p. 18] This work fits within the larger artistic
discipline of agent-based art – what artist Simon
Penny calls “behavior aesthetics”. These works
engage the performance of one or many
synthetic agents as they unfold temporally in the
world through situated artificial bodies. [11,
398] Such works are distinct from so-called
“generative art” or “algorithmic art”, which use
algorithmic processes not as an end, but as a
means to produce stabilized morphologies, such
as images, sound, and text. [12]
This study involves a series of artworks in
which LSTM recurrent neural networks were
trained on a single text corpus: a version of
Emily Brontë's novel Wuthering Heights,
adapted from the Gutenberg online library
(http://www.gutenberg.org/cache/epub/768/pg768.txt).
The source code used in this project is available
here: https://github.com/sofian/readings
Snapshots of the trained models were saved on
disk at different steps in the learning process,
resulting in a set of increasingly optimal models.
These models were then used as part of a
generative process to create a new text.
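This snapshot-then-generate pipeline can be sketched in Python as follows. The toy "model" below is only a character-frequency table standing in for the LSTM, and every name is illustrative rather than taken from the project's actual code:

```python
import random
from collections import Counter

def train_and_snapshot(corpus, n_snapshots=5):
    """Read the corpus incrementally, saving a frozen copy of the model
    (here: a simple character-frequency table) at regular steps."""
    snapshots = []
    step = len(corpus) // n_snapshots
    counts = Counter()
    for i in range(n_snapshots):
        counts.update(corpus[i * step:(i + 1) * step])  # "learn" more text
        snapshots.append(Counter(counts))               # freeze a copy
    return snapshots

def generate(snapshots, length):
    """Each snapshot generates an approximately equal share of the output,
    so the text replays the model's evolution from start to finish."""
    share = length // len(snapshots)
    out = []
    for snap in snapshots:
        chars, weights = zip(*snap.items())
        out.extend(random.choices(chars, weights=weights, k=share))
    return "".join(out)

snaps = train_and_snapshot("wuthering heights " * 50, n_snapshots=4)
text = generate(snaps, 400)
```

The key design point is that generation walks the snapshots in training order, so the reader experiences the learning curve unrolled across the length of the output.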
The first artistic output of that approach, for
the sleepers in that quiet earth, takes the form of
an artbook printed as a series of 31 unique
copies, each of which has 642,746 characters –
the same length as the version of Wuthering
Heights that was used for training the neural
network. Each copy is generated by a deep
learning agent known as an LSTM, trained on the
book. LSTM recurrent neural networks are a
kind of artificial neural network with recurrent
connections, which can “learn” from sequences
of data, such as words and characters. They are
used in state-of-the-art language processing
applications, such as speech recognition and
automated translation.
The result is a unique record of the agent as it
reads the book and learns the probability
distribution of characters, thus somehow
becoming increasingly “familiar” with its syntax
and style, while at the same time becoming
more and more complex in its generative
features. This uniqueness is important, because I see
the work less as a trace of the agent's behavior
than as a way to experience its behavior as if it
were happening in real time.
Like many other deep learning systems,
LSTM agents are both predictive and
generative. In most scientific applications, it is
their predictive capabilities that people are
interested in. For example, in machine
translation, deep learning systems of the LSTM
type are used to compare the probabilities of
different candidate translations and keep the one
that is most likely.
Another unique feature of deep-learning
systems is that unlike other AI approaches, they
improve iteratively. Starting from nothing, as
they become more and more exposed to data,
they improve and become better at prediction,
which also directly impacts their generative
capabilities, if they have any.
(The work is published by Bad Quarto; editor: Nick Montfort.)
These two ideas – generation and adaptation
– form the basis of for the sleepers in that quiet
earth. My intention in this work was not so
much to produce an accurate “optimal” system
that could generate rich, human-level, grammarcorrect sentences. Instead, I sought to allow the
hesitant, strenuous learning process of the
system to reveal itself as it goes through all of
its sub-optimal states of being.
Another key conceptual dimension of the
work resides in the ability of the agent to be both
a reader and a writer. If we picture the text
of Wuthering Heights as the “world” in which
the agent dwells and tries to make sense of by
“reading” sequences of characters, then as it
becomes more familiar with its environment, it
is also able to “write” new sequences, which can
give an insight into the agent's understanding of
its world. The performance trace of this agent is
made concrete in the archetypal object of
authorship: a book.
I decided to distribute only a printed version
of this book, not a digital version. This aspect of
the work is crucial, as it lends a physical
materiality to the agent and confers an identity
beyond its abstract virtual existence. The
artbook format contributes to the hybrid nature
of the work, combining visual arts, electronic
arts, and electronic literature.
The second output of the project is a series of
two sound-art pieces and one performance
realized in collaboration with Erin Gee. These
works explore different modes of revoicing texts
generated by the algorithm, using a technique
known as Autonomous Sensory Meridian
Response (ASMR), which involves the use of
sonic “triggers”, such as gentle whispering, or
fingers scratching or tapping, to induce tingling
sensations and pleasurable auditory-tactile
synaesthesia in the user. The phrases of the
soone and to the sooe are variations on the
incremental learning process used in for the
sleepers in that quiet earth, but using a shorter
text generated by a simpler model. Finally, the
work Machine Unlearning reverses the process
as part of a live performance, in which Gee
(https://eringee.net) reads a generative text that
starts with the fully trained neural network and
slowly regresses to randomness.

Preprocessing
Wuthering Heights contains a little more than
600,000 characters, which is rather small
compared to state-of-the-art language modelling
datasets, which usually contain several million
characters. (As a point of comparison, consider
the difficulty of learning how to write a book in
a language unknown to you, with the only
information being a single book written in that
language.) Starting with an open-access
version of Wuthering Heights, [15] I slightly
reduced the complexity of the learning task by
reducing the number of different characters
encountered, by (1) making all the letters
lowercase (so that the agent does not need to
distinguish between uppercase and lowercase
letters); and (2) removing low-frequency
characters such as parentheses, which appeared
only a few times in the text and would only
confuse the agent.
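These two preprocessing steps can be sketched as follows; the frequency threshold is an assumption, as the exact cutoff used is not stated:

```python
from collections import Counter

def preprocess(text, min_count=10):
    """Lowercase the text, then drop characters that occur fewer than
    `min_count` times (an illustrative threshold), so the agent never
    sees rare symbols such as parentheses."""
    text = text.lower()
    counts = Counter(text)
    return "".join(c for c in text if counts[c] >= min_count)

sample = ("It Is A Truth " * 20) + "(rare)"
clean = preprocess(sample)
```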
Training
To produce the work, an LSTM was trained on
the complete text of Wuthering Heights over
many iterations. Snapshots of the agent's
weights were saved at different steps in the
learning process, from the beginning, where it is
initialized randomly, to the end, after it has read
the book 150 times.
Learning was asymptotic, with many changes
happening during the first steps of training. This
resulted in the system appearing already “overly
trained” after the first epoch. To compensate for
this, I saved 200 snapshots during this first run-through using mini-batches of different sizes
(Fig. 2).
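The denser-early snapshot schedule described above can be sketched as follows. The counts (1 starting state + 200 in the first epoch + 150 afterwards = 351) follow the text; the steps-per-epoch value is illustrative:

```python
def snapshot_steps(steps_per_epoch, first_epoch_saves=200, later_saves=150):
    """Return training-step indices at which to save model snapshots:
    one at step 0, many evenly spaced inside the first epoch (where
    learning is fastest), then one per later epoch."""
    steps = [0]                                   # untrained starting state
    for k in range(1, first_epoch_saves + 1):     # dense: inside epoch 1
        steps.append(round(k * steps_per_epoch / first_epoch_saves))
    for e in range(1, later_saves + 1):           # sparse: one per later epoch
        steps.append((1 + e) * steps_per_epoch)
    return steps

schedule = snapshot_steps(steps_per_epoch=6000)
```

This front-loading is what flattens the learning curve in Fig. 2: early, rapid changes are sampled finely enough to evolve gradually in the generated book.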
7
8
The preprocessed version of the text which was used
as the training set is available here:
https://github.com/sofian/readings/blob/master/data/
wuthering.txt
Some basic preprocessing was done to the text, as I
explain later.
Fig. 2: Training loss (categorical cross-entropy) plotted against
(a) the training epoch for the first 75 epochs, and (b) the saved
model number up to the first 75 epochs. These graphs show how
the process of saving models during the first epoch flattened the
learning curve, allowing for more fine-grained evolutions
during the generative step. Notice that the first 200 saved models
happened during the first epoch alone.
These 351 snapshots – one in the starting
state, 200 during the first epoch, and 150 (one
per epoch) for the rest of the process – were then
used in a generative fashion to produce each
version of the work. Each snapshot was used to
generate an approximately equal portion of the
642,746 characters in the book.
The way the LSTM was trained helps
understand its behavior during the generative
phase. The network modelled the distribution of
sequential text patterns by estimating the
conditional probability of the next character xi
given the past N characters hi = xi-N … xi-1:
P(xi|hi)
This probability distribution is represented by
a function that produces one probability value
for each possible character. For example, let us
say that the N=8 previous characters seen by
the agent are “wutherin”. After training, we
would expect the agent to emit a high probability
P(g|wutherin) for the letter g (wuthering), a
lower probability P(’|wutherin) for a single
quote (’) (wutherin’), and near-zero probability
for every other character.
The network can then be used to generate new
sequences, simply by sampling randomly using
the distribution and repeating the procedure. To
get back to our previous example, after choosing
the letter g, the agent would sample a new
character, this time using the input “uthering” –
in which case we would likely expect a high
probability of s, a white space (_), and other
punctuation marks (.,?!).
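The sample-and-slide loop described above can be sketched as follows. A simple lookup table of observed continuations stands in for the trained network, and all names are illustrative:

```python
import random
from collections import Counter, defaultdict

def build_model(corpus, n=8):
    """Toy stand-in for the trained network: map each length-n context
    to a frequency table of the characters that follow it."""
    model = defaultdict(Counter)
    for i in range(len(corpus) - n):
        model[corpus[i:i + n]][corpus[i + n]] += 1
    return model

def sample_text(model, seed, length, n=8):
    """Repeatedly sample the next character from P(x | h) and slide the
    length-n context window forward, as described above."""
    text = seed
    for _ in range(length):
        dist = model.get(text[-n:])
        if not dist:                  # unseen context: stop early
            break
        chars, weights = zip(*dist.items())
        text += random.choices(chars, weights=weights)[0]
    return text

corpus = "wuthering heights " * 30
model = build_model(corpus)
out = sample_text(model, seed="wutherin", length=50)
```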
This kind of statistical approach, which looks
at the previous N units in a sequence, is known
as a Markov process and is very common in
natural language processing. [16]
One of its limitations is that it makes the
assumption that the closest elements in the past
are the most important for predicting the future,
which is an imperfect premise to say the least,
especially when it comes to language, where
there are often very long-term dependencies.
This explains to a large extent why the sentences
generated by the agent, even in the later stages
of training, are somehow detached from one
another, as the neural network fails to grasp
long-term dependencies between sentences.
To model this probability distribution, I used
an LSTM network with two layers of fully
interconnected hidden units with 200 neurons
each. Input streams were sent by chunks of 100
characters using a sliding window (N=100).
Input characters were represented using
embeddings, a technique in which each symbol
is represented by a vector, which is itself trained.
For example, in this work, I used embeddings of
size 5, which means that each character is
represented by 5 different values. These values
can be seen as a representation of different
characteristics of each character that can be
useful for the system to make better predictions
over sequences. For example, the first value
might represent whether the letter is a vowel,
and the second value whether it is a punctuation
mark. [14]
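As a sketch of this input representation (the values below are random placeholders; in the real system the embedding vectors are learned jointly with the network's weights):

```python
import random

def init_embeddings(vocabulary, dim=5, seed=42):
    """Give every character a trainable vector of `dim` values; training
    would then adjust these values by gradient descent (not shown)."""
    rng = random.Random(seed)
    return {c: [rng.uniform(-0.1, 0.1) for _ in range(dim)]
            for c in vocabulary}

def embed(text, table):
    """Replace each character of the input window by its vector, giving
    the sequence the network actually consumes."""
    return [table[c] for c in text]

table = init_embeddings("abcdefghijklmnopqrstuvwxyz .,", dim=5)
window = embed("wuthering", table)
```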
Generating
After the training, I obtained a series of
probability distributions at different stages of the
evolution of the model, which were then used to
generate each book.
Let f(x|h, θ) be the output of the LSTM for
character x, given the N past characters h and the
set of weights θ.
The probability distribution is represented by
the LSTM using the following softmax function:

P(x|h) = exp(f(x|h,θ)/τ) / Σ_{x'∈V} exp(f(x'|h,θ)/τ)

where V is the set of all possible characters (i.e.,
the vocabulary). Here the hyper-parameter
τ ∈ [0,∞] is called the temperature and is
typically set to 1. Raising the temperature
spreads out the probabilities, making them more
uniform, while lowering it makes the
distribution peakier, thus making the agent even
more likely to choose the letter with the highest
probability.

Temperature Adjustment
After some experiments, I noticed that the
probability distributions in the early stages were
“spread” too much across the characters (i.e.,
there were not many differences between the
individual probabilities) and that the agent
would thus generate text that appeared “too
random” for my taste. I therefore decided to
slightly adjust the probability distribution to
make it more “peaky” by decreasing the
temperature τ – thus effectively heightening the
probability of the most probable elements and
decreasing the probability of the others.

However, this approach seemed too “greedy”
in later stages, in which the agent became
complex enough to consider different sequences
of construction and completion. Thus, as the
agent’s training progressed, I adjusted the
probability distribution to be more “spread-out”
to encourage diversity (Fig. 3).

Shortlist
Still, since no character had zero probability,
there were always cases in which the agent
would accidentally generate a completely
arbitrary character. To limit this phenomenon
while allowing variety, I forced the agent to
choose among only a shortlist of the n most
probable characters. So the final probability
distribution is as follows:

Pn(x|h) = P(x|h) / Σ_{x'∈Vn} P(x'|h) if x ∈ Vn, and 0 otherwise

where Vn = Vn(h,θ) denotes the set of n characters
x with the largest value f(x|h,θ).

(In machine learning jargon, an epoch corresponds
to one full iteration over the training dataset – in
this case, the complete novel.)
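The temperature-scaled softmax and the shortlist restriction can be sketched as follows, with a small dictionary of scores standing in for the network outputs f(x|h,θ); the specific values are illustrative:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw scores into probabilities; a low temperature makes
    the distribution peakier, a high one makes it more uniform."""
    exps = {c: math.exp(v / temperature) for c, v in logits.items()}
    total = sum(exps.values())
    return {c: e / total for c, e in exps.items()}

def shortlist(probs, n):
    """Keep only the n most probable characters and renormalize, so the
    agent can never emit a wildly improbable character."""
    top = sorted(probs, key=probs.get, reverse=True)[:n]
    total = sum(probs[c] for c in top)
    return {c: probs[c] / total for c in top}

logits = {"g": 3.0, "'": 1.0, "z": -2.0, " ": 0.5}
probs = softmax_with_temperature(logits, temperature=0.5)
short = shortlist(probs, n=2)
```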
Fig. 3: Evolution of temperature (τ) throughout the book-generation process.
Transitions between Models
Finally, to allow for smooth transitions between
each block of text generated by each model, in
the last part of each section, I interpolated the
probability distributions of the current model
and the next model to generate each character.
This was parameterized by a transition factor
in [0, 1], representing the point in each block at
which I start interpolating. To generate for the
sleepers in that quiet earth, we used a transition
factor of 0.8; therefore, the last 20% of each of
the 351 blocks of text (each averaging 1833
characters) was obtained by linearly
interpolating the current probability distribution
and that of the next trained model.
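A sketch of this blending step (function names are illustrative; in the real system the two inputs are the distributions produced by consecutive trained snapshots):

```python
def interpolate(p_current, p_next, weight):
    """Linearly blend two character distributions: weight=0 gives the
    current model, weight=1 the next one."""
    chars = set(p_current) | set(p_next)
    return {c: (1 - weight) * p_current.get(c, 0.0)
               + weight * p_next.get(c, 0.0)
            for c in chars}

def blend_weight(position, block_len, transition=0.8):
    """Return the interpolation weight for a character at `position`
    inside a block: 0 before the transition point, then ramping
    linearly up to 1 at the end of the block."""
    start = transition * block_len
    if position < start:
        return 0.0
    return (position - start) / (block_len - start)

p_a = {"a": 0.7, "b": 0.3}
p_b = {"a": 0.2, "b": 0.5, "c": 0.3}
mid = interpolate(p_a, p_b, blend_weight(1650, 1833))
```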
Postprocessing
The final production of the artbooks for the
sleepers in that quiet earth involved an
additional step. Through discussions with editor
Nick Montfort, we implemented a few minor
changes to convert the raw generated text into
book format. For instance, we interpreted the
appearance of the word “chapter” followed by
roman numerals in the generated text (e.g., “chapter
xix”)10 as an indication of a new chapter, which
we therefore formatted differently with a page
break and bold typeface.
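A minimal sketch of this chapter-detection step; the regular expression and the heading treatment are assumptions, not the actual script used to typeset the book:

```python
import re

# Match the word "chapter" followed by a roman numeral, as emitted
# (in spurious order) by the model, e.g. "chapter xix".
CHAPTER_RE = re.compile(r"\bchapter\s+([ivxlcdm]+)\b")

def format_chapters(raw):
    """Set each detected chapter heading apart with a break marker,
    mimicking the page-break-and-bold treatment described above."""
    return CHAPTER_RE.sub(lambda m: "\n\n=== CHAPTER "
                          + m.group(1).upper() + " ===\n\n", raw)

formatted = format_chapters("and then chapter xix the moor was silent")
```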
10 Notice that these appear randomly. For example,
“chapter xi” might appear before “chapter iii”.

Results
This section discusses the results of the
generative process through an in-depth
examination of an unpublished version of for the
sleepers in that quiet earth. In this section, I
describe the progress of the agent as it runs
through the reading in terms of time. Here
“time” is understood in terms of character
position and is represented by the symbol t.
There are 642,746 individual characters in the
original text. So for example, at time t=64,274
the agent is about 10% into the book, and at time
t=321,373 it is halfway through.
Morphogenesis
The behavior of the writing agent throughout the
learning process manifests itself in a number of
different ways, corresponding to the state of the
agent as it becomes more and more attuned to
the “world” it lives in – that is, the text it is
reading. As is traditionally done, the neural
network was initialized with random weights,
representing a neutral state. At this point, the
agent had not been subjected to any observations
and therefore, had no understanding of the
world. Accordingly, in the first few pages of the
book, the agent behaved completely randomly.
The agent then proceeded to read the book one
character at a time to build an internal
representation of how character sequences are
generated in Brontë’s novel – in other words, by
building a model of the author’s style. In so
doing, it learned more and more about the
author’s style as it read, starting with building a
comprehension of sequences at the character
level and incrementally building from this to
groups of two, three and four characters,
forming syllables, then words, and finally
complete sentences.
Following is a case study of a particular
unpublished “reading” of the book, and thus
construction of an LSTM agent. Here is an
excerpt of the first “sentence” generated by the
agent:
Excerpt at t=0
Early on in the training (after reading a few
characters), the agent started to utter erratically
some of the characters it had seen:
Excerpt at t=40
Later on, when it had seen more, it became
obsessed with white spaces and frequent
characters such as the letter “e”.
Excerpt at t=530
These fixations can be explained through the
probabilistic approach governing the system.
More frequent characters simply have a higher
probability of appearing in the text. For
example, imagine yourself pointing to a random
character in a book and trying to guess what it is
without any context; you would likely have a
higher chance of making the right guess if you
chose a white space than any particular letter.
After reading a few hundred characters, the
letters produced by the neural net became more
condensed, and we saw some character
duplicates appearing. These were the early steps
of the agent moving beyond merely counting the
frequency of characters as a predictive
measurement. After it read about 5% of the
book, the letters became more condensed and
the agent even started to tentatively concatenate
frequent letters:
Excerpt at t=33,490
The Glitch
Surprisingly, not long after this point, the agent
seemed to regress to an earlier stage and started
behaving erratically for a while. This event
happened in only one specific case. I have not
been able to replicate this or explain the reasons
for this glitch, despite several attempts.
Excerpt at t=43,090

Excerpt at t=59,410
My best explanation is that this was due to an
early attempt by the neural network to make
sense of double-quotes (“”), which is one of the
hardest mechanisms to understand for a neural
network, as it involves looking backwards to a
previous point in the sequence – as opposed to
learning about syllables, which involves looking
back only one or two characters.
This, as well as the presence of tentative
sequences of double-quotes in the next few
learning steps, gives a hint in this direction –
although I was not able to verify it with
certainty. Importantly, whereas I ran several
training procedures to produce the work, tuning
the model and the training procedure, this
“glitch” appeared in only one of these
experiments. Even a slight modification in the
training data, such as removing the chapter titles
at one point, prevented the appearance of the
glitch. Since I thought this was such a
fascinating accident, I decided to work with the
specific experiment that produced it.
Morphemes and Proto-Words
Not long after resolving the “glitch”, the agent
eventually relaxed its generation of spaces. It
seemed to have finally learned one of the most
basic principles of the English language: the
separation of groups of letters using individual
spaces. From this point on, it started to
tentatively build morphemes of increased
length, separated by a single space. Sequences
were first limited to a series of one, two or three
of the most frequent characters.
Soon the agent started combining more
diverse groups of letters. Short words even
started appearing.
Excerpt at t=113,170
This was shortly followed by early attempts to
build short sequences of words, some of which
were even correct English, such as “in the”, “that
is”, “the mind” and “the mister”.
Excerpt at t=215,570
Punctuation and Sentences
After reading about a third of the book, the agent
started using punctuation. For example, here is
the first use of commas:
Excerpt at t=227,090
At about two thirds through the book, the
agent could construct sentences of varying
length, making syntactically appropriate use of
periods, commas, and quotes. The sentences
were mostly nonsensical and grammatically
imperfect. Yet they seemed to mirror some of the
core aspects of the original text, including the
use of the first person, an abundance of dialogue,
and the construction of long sentences with
many complementary clauses, a style that was
common in 19th century English literature.
Above all, it was the rhythmic qualities of the
text produced by the artificial agent that bore the
closest resemblance to Brontë’s style.
Excerpt at t=448,530
For comparison, consider this excerpt from
Chapter VIII of Wuthering Heights:
‘I guess she is; yet she looks bravely,’ replied the girl,
‘and she talks as if she thought of living to see it grow
a man. She’s out of her head for joy, it’s such a beauty!
If I were her I’m certain I should not die: I should get
better at the bare sight of it, in spite of Kenneth. I was
fairly mad at him. Dame Archer brought the cherub
down to master, in the house, and his face just began to
light up, when the old croaker steps forward, and says
he—“Earnshaw, it’s a blessing your wife has been
spared to leave you this son. When she came, I felt
convinced we shouldn’t keep her long; and now, I must
tell you, the winter will probably finish her. Don’t take
on, and fret about it too much: it can’t be helped. And
besides, you should have known better than to choose
such a rush of a lass!” [15]
Improvements
This is an excerpt after one epoch of training –
that is, after the agent had read the book once.
At this point the agent had learned to generate
complete sentences, with a few glitches. Many
of these sentences are still grammatically
incorrect and somewhat random. It is as if the
agent could only “see” two or three words in the
past, with usually only short sequences of two or
three words making logical sense together.
Consider for example the progression in the
following sentence generated after the first
epoch:
Excerpt at epoch 1
From this point forward, the neural network
was trained for several epochs, having re-read
the novel up to 150 times. Changes in the agent’s
output became less perceptible over these later
iterations. The first epoch allowed the agent to
grow from pure randomness to building
morphemes, words, and full sentences with
punctuation. In the following iterations, the
agent seemed to expand these basic building
blocks by (1) polishing grammar, (2) expanding
vocabulary, and (3) diversifying the length and
structure of sentences, including producing
dialogic constructs that are common in the
original text.
To get a sense of this evolution, here are some
sample sentences from epochs 20, 80, and 150,
which illustrate the transformation in the
agent’s behavior.
Excerpt at epoch 20
Excerpt at epoch 80
Excerpt at epoch 150
Machine Unlearning
Proceeding incrementally using models of
increasing accuracy is not the only way the
suggested method can be used. In Machine
Unlearning,11 artist Erin Gee performs using a
voice technique known as Autonomous Sensory
Meridian Response (ASMR). She reads a text
which was generated using the inverted process
presented above. Here we simply regress from a
fully optimized system down to an untrained
model.
Following is an example of such a text, which
was read by Gee during the work’s premiere in
May 2018:
The generative text read in Machine Unlearning (2018).
Conclusion
The computational artworks described in this
paper span diverse approaches, such as
electronic literature, generative art, and behavior
aesthetics. They make use of deep learning
recurrent neural networks, not so much as a way
to generate novel and creative writing by taking
advantage of the system’s ability to imitate
human performance, but to reveal the learning
process of the system. In other words, the
approach explored in this study subverts the core
purpose of artificial intelligence, whose aim is to
reproduce or exceed human performance, in this
case, by imitating the style of a well-known
English author. Instead, it focuses on the
behavior of the artificial agent as it tentatively
tries to achieve its goals.
Rather than focusing on the literary prowess
such computational systems can achieve when
they are fully optimized, these works offer a
unique insight into the inner workings of a
machine learning algorithm by turning the
experience of reading and listening into an
encounter with a learning agent. While these
works are certainly different in many respects
from canonical forms of agent-based artworks
(such as those employing situated robotic
systems), they share with them a unique focus on
using behavior as an artistic form on its own –
in these cases, through experiencing the learning
journey of an artificial deep learning agent.
More research needs to be done to understand
the relationship between the learning curve and
the perception of behaviors, looking at how
changes in the error rate correspond to
observable changes in the agent’s behavior.
Furthermore, while this study is limited to the
specific domain of text generators, future works
should focus on applying the approach to other
domains, such as robotics, sound and images.
Acknowledgements
The author would like to thank the Fonds de
Recherche du Québec – Société et Culture, the
Massachusetts Institute of Technology,
NVIDIA Corporation, The Trope Tank, Bad
Quarto, and Dr Nick Montfort for their support.

11 https://eringee.net/voice-of-echo-ii-meta-marathondusseldorf
References
1. Karl Sims, “Evolving Virtual Creatures.”
ACM Proceedings of the 21st Annual
Conference on Computer Graphics and
Interactive Techniques (1994): 15–22.
https://doi.org/10.1145/192161.192167.
2. Stephen Todd and William Latham,
Evolutionary Art and Computers. (Academic
Press, 1992).
3. Gaëtan Hadjeres, François Pachet, and Frank
Nielsen, “DeepBach: A Steerable Model for
Bach Chorales Generation,” ArXiv:1612.01010
[Cs], December 3, 2016.
http://arxiv.org/abs/1612.01010.
4. Douglas Eck and Juergen Schmidhuber, “A
First Look at Music Composition Using LSTM
Recurrent Neural Networks.” Technical report.
Manno, Switzerland: Istituto Dalle Molle Di
Studi Sull Intelligenza Artificiale, March 15,
2002. http://dl.acm.org/citation.cfm?id=870511.
5. Leon A. Gatys, Alexander S. Ecker, and
Matthias Bethge. “A Neural Algorithm of
Artistic Style.” ArXiv:1508.06576 [Cs, q-Bio],
August 26, 2015. http://arxiv.org/abs/1508.06576.
6. Ahmed Elgammal, Bingchen Liu, Mohamed
Elhoseiny, and Marian Mazzone, “CAN:
Creative Adversarial Networks, Generating
‘Art’ by Learning About Styles and Deviating
from Style Norms,” ArXiv:1706.07068 [Cs],
June 21, 2017. http://arxiv.org/abs/1706.07068.
7. Simon Penny, “Agents as Artworks and
Agent Design as Artistic Practice,” in Human
Cognition and Social Agent Technology, edited
by Kerstin Dautenhahn, Advances in
Consciousness Research, 19 (2000): 395–414.
https://benjamins.com/catalog/aicr.19.18pen.
8. Leonel Moura and Henrique Garcia Pereira,
Man + Robots: Symbiotic Art (Villeurbanne:
Institut d’art contemporain, 2004).
9. Arturo Rosenblueth, Norbert Wiener, and
Julian Bigelow, “Behavior, Purpose and
Teleology,” Philosophy of Science 10, no. 1
(1943): 18–24.
10. Maria Fernández, “‘Life-like’:
Historicizing Process and Responsiveness in
Digital Art,” in The Art of Art History: A Critical
Anthology, edited by Donald Preziosi, New ed.,
(Oxford History of Art. Oxford; New York:
Oxford University Press, 2006), 477-487.
11. Simon Penny, “Embodied Cultural Agents:
At the Intersection of Robotics, Cognitive
Science and Interactive Art,” in AAAI Socially
Intelligent Agents: Papers from the 1997 Fall
Symposium, edited by Kerstin Dautenhahn
(Menlo Park: AAAI Press, 1997), 103–105.
12. Sofian Audry, “Aesthetics of Adaptive Behaviors in Agent-Based Art,” in Proceedings of the 22nd International Symposium on Electronic Art (Hong Kong, 2016), 2–9. http://isea-archives.org/?page_id=36370.
13. Peter A. Cariani, “On the Design of Devices with Emergent Semantic Functions” (PhD diss., State University of New York at Binghamton, 1989).
14. Gordon Pask, An Approach to Cybernetics (London: Hutchinson, 1968).
15. Emily Brontë. Wuthering Heights, 1996.
http://www.gutenberg.org/ebooks/768.
16. Christopher D. Manning and Hinrich Schütze, Foundations of Statistical Natural Language Processing, 1st edition (Cambridge, Mass.: The MIT Press, 1999).
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
How does a Machine Judge Photos?
Comparing humans and algorithms
Wasim Ahmad
Syracuse University
wahmad@syr.edu
Abstract
Machine-vision technology has progressed to the point where it can do much more than just identify what is in a photo; it can also judge what makes a photo good or bad. This study investigates how well the current technology works, using software from a company recently acquired by computing giant Apple, and compares that software’s algorithmic aesthetic judgments of photos with how professional photojournalists view the same photos. One-on-one interviews revealed that while the humans varied in their responses to a photo, they often provided more than surface-level commentary, adding elements related to context and their own experience. Their preconceived biases also coloured their aesthetic evaluations.
Introduction
Machine perception has come a long way.
Computers have advanced from recognizing
simple text, to voice, and now a new frontier:
images. But the pace of image-recognition
technology has not kept up with the easier media
of text and voice. With so many pixels and so
much information to digest, the technology
required for a computer to fully understand the
context and content of a photo is still a long way
off.
Still, the algorithms used today are becoming
ubiquitous. Even services such as the lowly
Flickr can now recognize basic items in photos,
as can Google Photos. Apple’s iPhone has
become very good at recognizing faces and
organizing them into albums.
Few have applied the technology to draw deeper meaning from an image than that.
However, a French company, Regaind, has put
its algorithms to use to try to better understand
photos that are being run through its software.
The company’s software was good enough to
catch the attention of Apple, which quietly
purchased the company in September 2017. The
service was shut down during negotiations. [1]
In 2016, Regaind created a public
demonstration of its software, aimed at
photographers who wanted a critique of their
photographs. The program was called Keegan,
the photo coach (https://keegan.regaind.io/).
The underlying premise of the website was to take the algorithms the company had created for its business dealings (identifying objects and categorizing photos) and apply them to another purpose: critiquing a photo so that the photographer could improve upon it. When a photographer uploaded
a photo to the site, Keegan provided both written
feedback and a numerical dataset, which ranked
the photo according to several different metrics.
An example is shown in Appendix A. The
Keegan website was retired on Feb. 10, 2017,
but Regaind still offered the technology in a
more business-oriented format, without the
qualitative, human-sounding feedback, to
paying customers.
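Keegan’s numerical output (separate scores for several metrics plus one headline rating) can be sketched as a simple weighted aggregation. The following is an illustrative guess only, not Regaind’s actual model, which was never published; the metric names and weights here are hypothetical:

```python
# Illustrative sketch of per-metric aesthetic scoring, loosely modelled on
# Keegan's output format. Metric names and weights are hypothetical; the
# real Regaind model was never made public.

METRIC_WEIGHTS = {
    "composition": 0.30,
    "exposure": 0.20,
    "colour": 0.15,
    "moment": 0.20,
    "blur": 0.15,  # higher = sharper
}

def overall_rating(metrics: dict) -> float:
    """Combine per-metric scores (each 0-10) into one 0-10 headline rating."""
    total = sum(METRIC_WEIGHTS[name] * metrics[name] for name in METRIC_WEIGHTS)
    return round(total, 1)

photo = {"composition": 9.0, "exposure": 8.5, "colour": 8.0, "moment": 9.5, "blur": 8.0}
print(overall_rating(photo))  # 8.7
```

The real system presumably derived each metric from the image itself; the point of the sketch is only the two-level output, per-metric scores plus one aggregate, that Keegan reported to users.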
When the Keegan site was launched in 2016, it made waves in the photo industry. Whereas previously photographers needed a knowledgeable human to obtain a critique of their photos in words, now a machine could provide the same service using algorithms. [2]
The software could also output quantitative data
about the photo, opening up a completely
different avenue of study than that presented
here.
This topic is of particular interest because of
the potential seismic shift for a particular genre
of photography: photojournalism. Photo editors
at a major event, such as the Olympics, can often
receive upwards of 10 images per second from a
working photographer, and going through them
under ever-tightening deadlines is a difficult
task. [3] If the technology existed to separate the
good photos from the bad, the editors could
work much faster. Of course, there is also a
chance that the editors could be replaced.
The acquisition of Regaind by Apple increases the salience of this study. It is the most recent, and possibly the only, study that examines the efficacy of software that may power image-recognition technology on every iPhone and iPad on the market. The software was shut down mid-study when, as hindsight revealed, Apple entered into negotiations with the company’s founders.
It is not an exaggeration to say that not only
this software, but image recognition technology
in general will shape the future of image editing
across multiple industries, so understanding the
logic and process behind such software is
crucial. Human editors need to make decisions
about photos for public consumption, and users
need to curate their own personal libraries, so
this study attempts to understand how this
artificial intelligence works by examining the
responses to the latest software in the field. It is
through gaining this understanding that the implications of this technology for the media industry will be realized.
Literature Review
Previous research on this topic has mostly come from the realm of engineering. Some researchers have placed high consideration not only on how the aesthetic value of a photo affects machine perception, but also on how technical aspects of a photo, such as compression and noise, affect a machine’s evaluation of a photo. [4]
Other research has focused on what humans find memorable in photographs; not surprisingly, photographs with human subjects tend to be more memorable than those without. Colour and “interestingness” were also factors affecting a photo’s memorability. [5] If a machine tracks the same way, it could have far-reaching implications for the photo industry.
In the communications realm, there has not been an analysis of a direct contest between humans and machines in this area, though there are studies of human vs. human competitions: e.g., photos from professional photographers vs. those from citizen photojournalists. [6] There has even been a study of professionals vs. professionals, looking at which newspaper staffs are more professional and whether this professionalism produced better photography. [7] This study throws a machine into the mix, Regaind’s Keegan, comparing its qualitative responses to photographs with insights from professional photojournalists obtained through in-depth interviews. The goal of the research is to determine how far along image-recognition technology is, and whether, in its present state, its perception can rival that of humans in journalistic fieldwork. The aim of this exercise is to see if software can achieve even a basic level of competency in identifying the aesthetic qualities of a photo compared to photojournalists.
This approach holds appeal for both the engineering world and the communications world, putting this image-recognition technology to practical use and comparing it to human capability. Comparing human and machine results offers researchers an opportunity to further improve image-recognition technology until there is parity, at least from an aesthetic perspective. This would move image recognition to the next frontier: deciding which photographs are important in context, a skill that for the foreseeable future will require the hand of skilled human editors no matter how good the machines get.
With that in mind, the following research questions are examined in this study:
RQ1: How close to a human response does a computer algorithm get when looking at the aesthetic qualities of a photograph?
RQ2: What contributes to the difference between a computer’s interpretation of a photograph and a professional journalist’s?
Method
In this study, five photographs were run through
Keegan, and its qualitative evaluations were
Part I. Full Papers (peer-reviewed)
recorded. The photos were shot by the researcher or an associate and were not well known enough to have been published elsewhere.
Although there are many famous photos that
easily come to mind when considering
photojournalism (many readers may have a
ready image in mind, such as Nick Ut’s Vietnam
War-era “Napalm Girl” photo or Richard
Drew’s “Falling Man” photo from the 9/11
terror attacks on New York), there’s a risk that
the participants in the study would bring their
own preconceived notions of these photos to
their interpretations of the aesthetic qualities. To
avoid this, the photos used were taken by the
researcher so that viewers would not have any
history with them. This is similar to an approach
used in a previous study to prevent prior
memories of photographs from interfering with
the study. [8]
Ten current and former professional
photojournalists and photo editors were chosen
through purposive sampling for one-on-one,
semi-structured in-depth interviews about the
same five photos. They were asked first for their
overall impression, and then asked to comment
on items that Keegan frequently brought up, including composition and framing, background, exposure and lighting, colour, moment, blur, and a numerical rating. The
participants all had a minimum of five years of
experience, ranged in age from 27 to 64, and
comprised five males and five females. For the
in-person interviews, printed photos were used,
and for phone interviews, e-mailed photos were
used. Their interviews were recorded,
transcribed and then inputted into NVivo for
analysis.
First cycle coding was done as magnitude
coding. [9] The criteria outlined in the questions
(composition, background, colour, exposure,
moment and blur) were coded as positive or
negative. Keegan was also included in this
process. Coding the human responses revealed
additional themes that were unexpected, and
pattern coding was used to group these thoughts
together to reveal more information.
Results
The human responses differed in several ways from those of the software. This was not unexpected. However, what was unexpected was that in some cases there were advantages to using the machine responses over the human responses, the biggest being consistency.

Evaluating context
Context was one of the most dominant themes to come up. The participants consistently asked how a photo was going to be used. By contrast, a machine such as Keegan has no outward appearance of caring about context, but that doesn’t mean that context isn’t coded in. There is simply no way to tell whether Keegan was programmed by landscape photographers or photojournalists.
For instance, Jason, a photojournalist-turned-studio photographer, had this to say about an extensively altered portrait photo of a young child dressed as Thor, a comic-book superhero: “I like what they did in this photo. I’ve seen some of this work down in Texas; a couple of guys used Photoshop, and it had a really nice effect. It’s cute. It made me laugh. They definitely caught the moment.”
Contrast that with how Leslie, a former photojournalist, acknowledges her bias about the same photo: “I will say [I rate this photo] a 5, because I just hate studio pictures … but that has nothing to do with it; it’s a great, fun photo of your child or someone’s child, so that’s good, and I think that, you know, my bias comes from being a professional photojournalist. If I were a portrait photographer, I might give it a 10.”
Keegan’s programming seemed to be keyed in by portrait and studio photographers, because of all the photos in the study, the child Thor was its favourite. It said this of it: “I’m interested, and I don’t want to look away; congratulations! Composed quite well. Very dynamic. Overall, pretty good shot! 8.7/10; you deserve it, champ. Everything is so perfectly framed that you get the framing ribbon! Now you’ve got the idea. Feel free to send me as many photos as you want. I’ll be glad to comment on them and give you my feedback. After 10 pictures, I will evaluate your level in terms of creativity and composition.”

Misunderstanding images
Misunderstanding was a common theme. Even at the most basic level, the human participants could figure out the intent of the photographer and recognize a distinctive feature, such as a silhouette, as a photographic choice rather than a mistake that required studio lighting to fix, as Keegan suggested. This tied in with experience. At times, Keegan failed to meet even the level of expertise of an entry-level photojournalist, although in some cases neither provided the deep level of detail that those with more experience provided. Keegan’s advice about studio lighting centred on a photo at a fair of the “Zipper” ride, where studio lighting was neither needed nor practical, and the silhouette was intentional. None of the photographers in the study made the same call as Keegan.
Contradictions and bias
The human participants would sometimes contradict themselves about how they felt about an aspect of a photograph, or they indicated bias on learning that a photo was shot with a cell phone. Professional photojournalists often frown upon cell phone photos. This was one area in which Keegan’s objectivity was an advantage. Keegan did not seem to differentiate or care what device was used to shoot a photo, and its results were consistent, as opposed to the human participants, who often contradicted themselves in the same sentence. For instance, Jason, the photojournalist-turned-studio photographer, had this to say about the composition of the Dominican Day Parade photo: “I like its composition; I think it’s a little loose.” These two statements don’t make sense in the same sentence without a contrast word. Keegan offered no such ambiguity.
The prejudice of professional photographers is
a widely known industry issue. Photographers
often frown upon using anything other than
professional cameras and instantly dismiss what
they consider snapshots with point-and-shoot
cameras or phones. This was also true of the
photojournalists in this study. Jessie, a
photojournalist, had this to say about the photo
of kids in a bounce-house: “This is definitely,
like, a snapshot of ‘Hey look, there’s my son’ or
‘I gotta get a photo of this kid’ type of photo.”
That attitude coloured the rest of her critique of
the photo. When asked to rate the photo, she
wanted to go lower than the scale allowed and
give it a 0. By contrast, since Keegan was
programmed by a company, it tended to be more
tactful. For example, it had this to say about the
same relatively poor photograph: “Nice timing,
but a bit blurry. This pick is just … ok. Don’t
forget about the blur and background. A solid
5.7/10. Not bad, but I’m sure you can do better!”
The human bias was related to experience. In
many cases, the photojournalists in this study
were blunt with their critiques because they
were battle-hardened by field experience. The
more experience a participant had, the more
detailed their critique, with photographers who
also had photo-editing experience providing the
most detailed responses. Keegan, by contrast,
offered mostly surface-level and similar
critiques, likely owing to its limited database of
pre-programmed responses to photos.
Conclusions
The machines aren’t there yet. But it’s not easy
to say why. Some research points to technical
issues with photos. Resolution and compression,
for instance, could put software at a
disadvantage, but the same could be said for
humans. [10] Print quality or monitor quality
was brought up in some cases.
One limitation of the study was the photos
themselves. Regaind shut down Keegan earlier
than promised, so there was no opportunity to
run more journalistic photos through it. The
reason for this mysterious cut-off in
communication became clear when Apple’s
acquisition of the company was reported in the
media. [11] The photos chosen were a more
general set used for exploratory purposes, but
they ended up being the main photos used for the
study. Regardless, the photos provided some
insight into how the program perceives images.
Since this study began, new software and
products have come out that utilize machine
vision. Amazon, for instance, released a device
that takes a photo of users and offers fashion
advice. While the technology has significant
implications for journalism, there’s a wide range
of consumer-based applications to be explored,
an area ripe for future study.
References
1. Romain Dillet, “Apple quietly acquired computer vision startup Regaind,” TechCrunch, September 29, 2017. https://techcrunch.com/2017/09/29/apple-quietly-acquires-computer-vision-startup-regaind/.
2. Michael Zhang, “Keegan is an Online A.I. Photo Coach Who Critiques Your Photos,” PetaPixel, October 8, 2016. https://petapixel.com/2016/10/08/keegan-online-photo-coach-critiques-photos/.
3. Jack Crager, “How Getty's Olympics Photos are Shot, Edited, and Sent into the World in Just Two Minutes,” Popular Photography, August 3, 2016. http://www.popphoto.com/how-olympic-images-reach-your-eyes-in-two-minutes-flat.
4. Xiaoou Tang, Wei Luo, and Xiaogang Wang, “Content-Based Photo Quality Assessment,” IEEE Transactions on Multimedia 15, no. 8 (2013): 1930–1943.
5. Phillip Isola, Jianxiong Xiao, Devi Parikh, Antonio Torralba, and Aude Oliva, “What Makes a Photograph Memorable?,” IEEE Transactions on Pattern Analysis and Machine Intelligence 36, no. 7 (2014): 1469–1482.
6. Tara Buehner Mortensen and Ana Keshelashvili, “If Everyone with a Camera Can Do This, Then What? Professional Photojournalists' Sense of Professional Threat in the Face of Citizen Photojournalism,” Visual Communication Quarterly 20, no. 3 (2013): 144–158.
7. Thomas Coldwell, “Professionalization and performance among newspaper photographers,” International Communication Gazette 20, no. 2 (1974): 73–81.
8. Phillip Isola, Jianxiong Xiao, Devi Parikh, Antonio Torralba, and Aude Oliva, “What Makes a Photograph Memorable?”
9. Johnny Saldaña, The Coding Manual for Qualitative Researchers, 3rd edition (Los Angeles: SAGE, 2016).
10. Xiaoou Tang, Wei Luo, and Xiaogang Wang, “Content-Based Photo Quality Assessment.”
11. Romain Dillet, “Apple quietly acquired computer vision startup Regaind.”
Appendix A
Following is a sample of the output obtained
from running a photo through Keegan the photo
coach. As you can see, it offers a few sentences
of critique for each photo inputted by the user,
followed by a detailed analysis of several
attributes of the photo.
Appendix B
These are the five photos used in the study, in
the same order presented to the participants.
The three phone interview participants viewed
these on their computer screens at the highest
resolution available for each photo, depending
on the camera used. The seven in-person interview participants viewed them as 8.5 x 11” print-outs on Canon Lustre photo paper, printed on a pigment-ink printer, the Canon Pro-10.
Ornament and Transformation - the Digital Painting of Robert Lettner
at the Interface of Analogue and Algorithmic Art
Harald Kraemer
School of Creative Media, City University of Hong Kong
H.Kraemer@cityu.edu.hk
Abstract
In the late 1960s, there was a revival of ornamental visual language under the term 'Neue Ornamentik'. Inspired by the Chinese game Tangram, Wassily Kandinsky's idea of "geometrical ornaments," the writings of Max Bense, and Josef Frank's designs for wallpaper and fabrics, Austrian artist Robert Lettner (1943–2012) developed an interest in ornament and ornamental structure. Because he understood ornament not as a symmetric repetition of motifs but as a strategy for visualizing complex structures, he first developed an analogue and later, together with Walter Worlitschek and Philipp Stadler, a digital visual language, which resulted in a series of more than 250 digital paintings from 1995 to 2012.
Based on the structural-systemic approach of
ornament, there are three principles in the digital
paintings of Robert Lettner: (1) the principle of
the serial sequence, (2) the modular principle,
which supports the idea of replacing single
elements within a closed system, and (3) the
principle of the algorithmic image composition,
in which the computer defines the visual
outcome.
This text provides an introduction to Robert
Lettner and a comprehensive overview of the
results of a research project to build an archive
and database. The project was started in 2013
and completed in 2018. We honour his work as
a lesser-known pioneer of computational art, or
rather algorithmic art. It is my hope that this
essay will establish a basis for future research
that will relate Lettner’s digital paintings to the
works of pioneers of computer art.
From Analogue Drawings to …
In the mid-1960s, about half a century after the
end of historicism, engagement with the
ornament experienced a revival, known as 'Neue Ornamentik' ('New Ornamentation'). Klaus Hoffmann claimed that this revival "is pointing out the denunciation of the ornament and discovers a new consciousness of ornamentality." [1]
Vienna-based artist Robert Lettner indulged
himself in the passion for ornament during those
years. He received the British Council
Scholarship for his studies in 1972-73 and
moved to London. During his scholarship-funded studies, he deepened his interest in
ornament by visiting the Victoria & Albert
Museum to study its rich collection of works by
William Morris and the Arts & Crafts
Movement, and created drawings that show his
engagement with the repetition structure of iron
fences, which he drew in an abstract form as an
array of lines. [2] The drawings he made near
the subway station of Royal Oak in London in
1973 were attempts to combine the experience
with the spatial environment, together with the
extensive character of ornament. [3]
Two other ink drawings from 1972 show
strongly rasterized and extremely fragmented
structures (see Fig. 1 and 2). [4] Upon closer
observation, it becomes clear that Lettner
created his drawing in three stages. After doing
a rough pencil layout he redrew the lines in ink
with a Rotring pen, and then he filled in the gaps,
creating a dense structure. He used this time-consuming procedure to create drawings of
chestnut blossoms in 1975 and more than 70 ink
drawings of plants from 2008 to 2012. [5] From
research made in the framework of the
exhibition In Dialogue with the Chinese
Landscape in 2017 in Hong Kong, we conclude
that Lettner combined two drawing techniques
from the classic Manual of the Mustard Seed
Garden (芥子園畫傳) and adapted them to his ink
drawings. [6][7] Following this manual of
Chinese painting from the early Qing Dynasty,
Lettner first made a pencil sketch and used the
double-line method to outline the shape in ink.
Fig. 1. Raster (grid), 1972, Robert Lettner, pencil, ink on paper, Robert Lettner Archive, Vienna.
Fig. 2. Raster (grid), 1972, Robert Lettner, pencil, ink on paper, Robert Lettner Archive, Vienna.
Figure 1 anticipates his motif of the knot, while Figure 2 anticipates the series of digital images of the magical geometry, which he created between 1995 and 1998.
When Lettner introduced his sketches to mathematician Herbert Fleischner, he learned that his analogue drawings dealt with a mathematical problem that engaged both mathematicians and pioneers of computer graphics. The problem relates to the idea of Max Bense, elaborated in his four Aesthetica essays, that it is possible to calculate the aesthetic value of information through a mathematical formula. [9] Since the algorithm-based software of the late 1960s was far from offering visually oriented solutions to complex problems, computer graphics served a merely decorative purpose, classified as Op Art, as many results show. The "artists" of those early years were often computer scientists, mathematicians or engineers who were interested in the aesthetic issues of algorithmic design, such as Frieder Nake, Georg Nees, Herbert W. Franke, and A. Michael Noll. [10]
In contrast to many professional artists who rejected the output of these creative dilettantes as 'doodles', Lettner was fascinated by the huge potential of computational power and pursued, over the following decades, the question of how to create art with information technology. Ornament became a strategy for visualizing complex structures within analogue and digital systems.

... to Digital Painting
Lettner's approach to ornament is complex and
can be divided into three main areas:
1. The principle of serial sequence. This
principle can be found in his works in the series
Das Spiel vom Kommen und Gehen (The game
of come and go, 1976–1990), Die reproduzierte
Reproduktion (The reproduced reproduction,
1989–1992), and Landschaft Bilder Therapie
(Landscape Paintings Therapy, 1982–1990).
2. The modular principle, which supports the idea of replacing single elements within a closed system, which Lettner applied in Landschaft Bilder Therapie (Landscape Paintings Therapy, 1982–1990) and, most importantly, in Disketten-Bilder (1986–1989), Figurationen (Configurations, 1991), Eindeutigkeiten (Unambiguities), Mutationen (1992), Dubliner Thesen zur Informellen Geometrie (Dublin theses on informal geometry, 1992–1994), and Mein Herbarium (My Herbarium, 1990–1994).
3. The principle of algorithmic image composition, which can be seen in his digital paintings (1995–2012), e.g. in the series Bilder zur magischen Geometrie (Images of magical geometry), in the series Über die Dialektik des Fadenscheinigen im Ornament (On the dialectic of flimsiness in ornament), and in a variety of his Spiegelungen (Reflections) works. Since Lettner varied his motifs and compositional elements, the borders between these three principles are fluid and their contents strongly connected.
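Read computationally, the three principles can be loosely sketched in code. The following is a hypothetical illustration, not a reconstruction of Lettner's actual digital process, which is not documented in code; the module names are invented:

```python
# Loose illustration of the three principles: (1) a serial sequence of one
# motif, (2) modular substitution of a single element within a closed system,
# (3) an algorithm (here a seeded RNG) deciding the final composition.
# Purely hypothetical; not Lettner's actual working method.
import random

MODULES = ["square", "triangle", "parallelogram"]  # Tangram-like elements

def serial_sequence(motif, length):
    """Principle 1: repeat one motif as a serial band."""
    return [motif] * length

def substitute(band, index, replacement):
    """Principle 2: swap a single element within the closed system."""
    return band[:index] + [replacement] + band[index + 1:]

def algorithmic_composition(length, seed):
    """Principle 3: the computer defines the visual outcome."""
    rng = random.Random(seed)
    return [rng.choice(MODULES) for _ in range(length)]

band = serial_sequence("square", 5)
band = substitute(band, 2, "triangle")
print(band)  # ['square', 'square', 'triangle', 'square', 'square']
print(algorithmic_composition(5, seed=7))  # deterministic for a fixed seed
```

The fluidity the text describes is visible even here: the serial band feeds the substitution, and the seeded generator can take either as its vocabulary.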
1. The Principle of Serial Sequence
When Lettner started the series Das Spiel vom Kommen und Gehen (The game of come and go), he asked himself what to do with the leftover tapes from airbrush production. In 1970, he came up with the idea of sticking acrylic paint-polluted tape into a spiral-bound photo book. This idea resulted in a series of artworks: Klebestreifen (Tapes) in 1976 and Zeilen (Lines) in 1978 (Fig. 3). The artist created a small passe-partout, which enabled him to select and focus on certain sections of the images. Some of these image sections inspired him and led him to new series of artworks. While he focussed on a simple, low-saturated visual language in his paintings of 1976, his 1982 paintings, which were up to 200 x 200 cm, were much closer to the idea of his early tape works.
Some of his paintings were given new titles for the exhibition Philosophie der Landschaft (Philosophy of Landscape) in 2011, so they are now named Eine frühe Aufzeichnung des Messbaren (An early record of the measurable) or Ungenau aber schön (Imprecise but beautiful). He also created simple line-based ink drawings in 1982, which refer to the principle of the serial sequence and can be seen as 'audiovisual' drawings in terms of Farbpartituren (Colour scores) (Fig. 4).
For the exhibition Elements. Austrian Paintings since 1980, held in Dublin in 1996, Lettner used his original material to create an artist's book titled Das Spiel vom Kommen und Gehen (The game of come and go). The members of the Viennese Low Frequency Orchestra translated the tapes into a score and created a performance, which has been presented several times since 2006, as a video installation and in concert.
Fig. 3. Das Spiel vom Kommen und Gehen (Klebebilder) (The game of come and go, tape images), 1978–2010, Robert Lettner, acrylic on tape, Robert Lettner Archive, Vienna.
According to musicologist Stephan Sperlich,
the series Das Spiel vom Kommen und Gehen
"renders readability (and in its consequence
visibility and audibility as well) of an implicit
structure [...]. A readability that can only happen
in the process of creation." [11] In 1990, the series Das Spiel vom Kommen und Gehen (The game of come and go) appeared again. This time Lettner created the works in portrait format with higher contrast and played with elements of the Disketten-Bilder series. This is visible in his Figurationen series (1991–1992), which shows the compositional concept after sequencing (Fig. 5).
Between 1989 and 1992, Lettner dealt with the problem of the "reproduced reproduction" and was interested in the question of when a reproduction becomes an original. For this purpose, he used motifs from daily events, which can yield different interpretations through multiple reproductions, depending on the intention of the content. One example from this series of works is N.Y. Times Square 1987 February 22, 5 p.m. (1989), a tribute to Andy Warhol: Lettner, who was in New York when Warhol died, photographed the Times Square news ticker announcing Warhol's death on 22 February 1987.
The unusual design of the exhibition Landschaft Bilder Therapie (Landscape Paintings Therapy), which Lettner organized in the Minority Church in Krems in 1988, also relates to the principle of serial sequences. The exhibition showed 84 artworks from 1982 to 1988, separated into six groups of 14 paintings of the same format. The sequential arrangement of the artworks shows the strength of the variety of motifs. From a distance, the arrangement of the paintings recreates ornamental banding.
Fig. 4. Das Spiel vom Kommen und Gehen (Tuschzeichnungen) (The game of come and go, ink drawings), 1982, Robert Lettner, ink on paper, Robert Lettner Archive, Vienna.
Fig. 5. Figurationen (Configurations), 1991–1992, Robert Lettner, acrylic on canvas, Robert Lettner Archive, Vienna.

2. The Modular Principle
The artist’s library contains a book about
Tangram, the traditional Chinese puzzle with
seven shapes. [12] Lettner pointed out the
importance of this game "since it creates a
constellation of seemingly incompatible
elements of the same system, all in a playful
way." [13] The hidden principle of modularity, which supports the exchangeability of single elements within a system, can be applied to some series of Lettner's work, like the Figurationen (Configurations) (Fig. 5). This
series contains forms that seem to spring directly
from the Tangram game and at the same time
demonstrate the infinite potential of juxtaposing
forms.
The previously mentioned series Landschaft
Bilder Therapie (1982–1990), which is an
example of the principle of serial sequence, can
be understood in terms of the modular principle
as well. This modularity is even more evident at
the interface of the series Disketten, which
includes Disketten (1986–89), Kosmopolitisch
(1989), and T1 to T4 (1992). [14] Inspired by
the Tangram puzzle, Lettner used simple single
shapes for the wall design of a hospital in
Mödling in Lower Austria (1993–1995).
Fig. 6. Drei Eindeutigkeiten des Mathematikers Herbert
Fleischner (Figure 6) / Drei Mutationen des Malers Robert
Lettner (Figure 5), 1992, Robert Lettner, silkscreen on canvas,
Robert Lettner Archive, Vienna.
In Drei Eindeutigkeiten des Mathematikers
Herbert Fleischner und Drei Mutationen des
Malers Robert Lettner (Three uniquenesses of
the mathematician Herbert Fleischner and
Three mutations of the painter Robert Lettner),
Lettner was also influenced by the philosophy
behind Tangram (Fig. 6). But this series of three
pairs of silkscreens was the result of a
collaboration between Lettner and the
mathematician Herbert Fleischner in 1992. The
starting point of their collaboration was three of
the mathematician's graphs that had similar
features and could be combined through
transformation.
Herbert Fleischner explained that
mathematicians "think in abstract cases to
recognize connections between the features of
any object (e.g. the mentioned graphs)," and he
used this to visualise his thoughts. The
mathematician thus created a new reality and
clarity.
Lettner was inspired by this and abstracted the
clarity by comparing the mind-set of the
mathematician with the mind-set of the artist.
The term "Mutations" is of central significance,
since "every single graph can be transformed
into the other two graphs with their specific
features." [15]
3. The Principle of the Algorithmic Image
Composition
Lettner described the technical structure of his
digital paintings as follows: "A hand-drawn
sketch has to be digitally printed in two colours
on large plastic foils and attached to an
aluminum bar." [16] Even though it sounds like
a simplification of the artistic process, it is the
result of a process the artist called "the merger
of organic and inorganic aesthetics. The organic
aesthetic is based on an automated hand drawing
that is digitally edited.” The “digital editing
process", which Lettner calls an "inorganic
process", "creates the final result, which is not
new, but represents something new in the way it
was produced. This understanding of aesthetics
is the result of the merger of two processes;
historically, we are on that point of merging the
manual and technical worlds. That is the actual
transition process." [17] The digital paintings
have a playful approach to the laws of
algorithmic ornaments.
Fig. 7. Die magische Geometrie (Klebebild) (The magical
geometry, tape image), 1981, Robert Lettner, tape with acrylic
on photocopy, Robert Lettner Archive, Vienna.
The works in the series Bilder zur magischen
Geometrie (Paintings of magical geometry,
1995–1998), which were exhibited in the
Wiener Secession in winter 1998/1999, were
inspired by the series Die magische Geometrie
(The magical geometry), created in 1981 (Fig. 7).
Fig. 8. Bilder zur magischen Geometrie, Serie I/13; Serie I/2
(Paintings of magical geometry), 1996, Robert Lettner,
Plotterprint, Robert Lettner Archive, Vienna.
This work is based on a complex hand-drawn
grid, which contains repetitive forms and a
horizontal and vertical sequence, following the
ABAB-scheme. Lettner copied his own drawing
in 1981 (Fig. 7), decorated photocopies
individually with tape and called his work Die
magische Geometrie. Since the mid-1990s,
printing technology had improved, enabling
Lettner to produce large-scale prints on acrylic
plastic.
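The ABAB scheme can be made concrete with a small sketch. The following is my own illustration, not Lettner's working method: it alternates two placeholder motifs, "A" and "B", along both the horizontal and the vertical axis, reading the ABAB scheme as a checkerboard:

```python
# Illustrative sketch only (not Lettner's process): an ABAB alternation
# of two motifs in both the horizontal and the vertical direction.
def abab_grid(rows, cols, motifs=("A", "B")):
    """Return a grid in which the motif alternates along both axes."""
    return [
        [motifs[(r + c) % 2] for c in range(cols)]
        for r in range(rows)
    ]

for row in abab_grid(4, 8):
    print(" ".join(row))
```

Read from a distance, such a grid of repeated forms produces exactly the band-like ornamental effect the text describes.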
In the Viennese Secession exhibition, several
variations of Paintings of magical geometry
were shown (Fig. 8). The ornamental structures
he produced in an algorithmic process of data
processing contain clear symmetrical features of
the classic understanding of ornament, as well as
the infinite multiplication of the Celtic
understanding of ornament to create dynamic
structures. [18]
The following series, Über die Dialektik des
Fadenscheinigen im Ornament (About the
dialectic of the flimsiness in the ornament),
produced in Lettner's collaboration with Walter
Worlitschek in 2000, is of special interest,
since the motif of the knot appears in
Lettner's analogue paintings as well (Fig. 9).
[19] His paintings in the Knotenbilder (Knot
Paintings) series show floating knots in a
fictional landscape, which creates a connection
between different creation techniques. The
knots of his digital paintings are more highly
saturated, but they relate more to the first
generation of images of the Bilder zur
magischen Geometrie series. Lettner’s knots can
be associated with Arabian inspired ornaments
as well as arabesque.
The arabesque motif, a prototype of ornamental
design, was described in 1893 by the Austrian
art historian Alois Riegl in his book Stilfragen.
Grundlegung zu einer Geschichte der
Ornamentik. [20] With this tendril, from which
buds and flowers sprout in infinite succession
like a Mandelbrot set, Lettner succeeds in
interrupting and reinforcing the symmetrically
arranged grid by means of another variable
element. Arabesques are "the result of a highly
complicated mathematical formula, which, as
Muslims feel, indicates the wonderful structure
of the world." [21] In later conversations,
Lettner said that during this time he studied the
repetitive methods of Arabic and Celtic
ornament. He understood his digital painting as
an artistic reaction to the writings of Alois Riegl,
Wilhelm Worringer and Max Bense, but also to
Benoît B. Mandelbrot's notion of fractal
geometry. [22]
In 2003, Lettner started a new collaboration
with Philipp Stadler and created a new series of
works. The series Das unsichtbare Archiv des
Arcimboldo (The invisible archive of
Arcimboldo) was inspired by oriental carpets;
illustrations from old Viennese cookbooks
were cut out and scanned. This series, along with
Bilder zur magischen Geometrie, is an example
of the "mathematization of the arts", as Max
Bense said, citing the "repetition of one single
element according to the laws of symmetry." [23]
Fig. 9. Über die Dialektik des Fadenscheinigen im Ornament
(About the dialectic of the flimsiness in ornament), 2000, Robert
Lettner and Walter Worlitschek, inkjet on canvas, Robert Lettner
Archive, Vienna.
Fig. 10. Das unsichtbare Archiv des Arcimboldo (The invisible
archive of Arcimboldo), 2003, Robert Lettner and Philipp
Stadler, plotterprint on canvas, Robert Lettner Archive, Vienna.
The mathematicians Herbert Fleischner and
Christoph Überhuber, and the philosophers
Burghart Schmidt and Mara Reissberger, a
specialist in the history of ornament, took a
keen interest in this new series of works, since
the Information Technology industry uses the
idea of "structural design patterns" as well.
In the same year, 2003, Lettner created the
series Mein Uterus verlangt nach deinem
Zungenkuss (My uterus requires your tongue
kiss) (Fig. 11). At first glance, it seems to show
surfaces as they would develop from the process
of marbling paper and invite random
associations à la Rorschach. But through
rotations, multiplications and mirroring, the
program creates tensions between the elements,
while the borders vanish. This work gains its
tension from the co-existence between
symmetrical order and asymmetrical chaos,
which fight for attention. An der Schnittstelle zur
Unendlichkeit (At the interface to infinity, 2009)
is an extended version of this strategy (Fig. 12).
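The operations named here, rotation, multiplication, and mirroring, can be sketched in a few lines. The following toy example is my own illustration, not the program Lettner and Stadler actually used; it assumes a motif stored as a small raster grid and generates the eight symmetry variants of the square (the dihedral group), the raw material for this kind of ornamental multiplication:

```python
# Toy sketch of the symmetry operations described in the text
# (rotation, mirroring, multiplication). This is the author's
# illustration, not the software used by Lettner and Stadler.
def rotate90(m):
    """Rotate a square raster motif 90 degrees clockwise."""
    return [list(row) for row in zip(*m[::-1])]

def mirror(m):
    """Mirror a motif along its vertical axis."""
    return [row[::-1] for row in m]

def dihedral_variants(m):
    """Apply all eight rotation/reflection operations of the square."""
    variants, current = [], m
    for _ in range(4):
        variants.append(current)
        variants.append(mirror(current))
        current = rotate90(current)
    return variants

motif = [[1, 0],
         [0, 0]]
for v in dihedral_variants(motif):
    print(v)
```

Juxtaposing such variants edge to edge is what lets the borders between elements vanish while symmetrical order and asymmetrical chaos coexist, as the text describes.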
Fig. 11. Mein Uterus verlangt nach deinem Zungenkuss,
Reflection A4 V1 (My uterus requires your tongue kiss), 2003,
Robert Lettner and Philipp Stadler, plotterprint on canvas,
Robert Lettner Archive, Vienna.
Fig. 12. An der Schnittstelle zur Unendlichkeit (Reflection A67
V4) (At the interface to infinity), 2009, Robert Lettner and
Philipp Stadler, plotterprint on canvas, Robert Lettner Archive,
Vienna.
Though the works already had titles, some of
them were renamed for the exhibition:
Philosophie der Landschaft (Philosophy of
Landscape), Natur ist keine Katastrophe
(Nature is not a catastrophe), Der Wassergarten
im Hause Neptun (The water garden in the
house of Neptune), Kalvarienberg – von allen
Seiten kamen sie (Calvary – they came from all
sides), and Bikiniatoll oder Ein
Kreuzklangsonett (A cross sound sonnet).
Lettner said the titles of his works float and can
change over time, just as the audience will
change over time. Also, he considered the title
of a work a rebus to hide something about the
work rather than giving an actual explanation.
[24]
Spiegelungen (Reflections, 2004–2012) and
Synchronwelten (Synchronous worlds, 2010–
2012) are small-scale studies that Lettner
created in large numbers. The scanned versions
are the foundation for digital paintings. As
described in the catalogue for the exhibition in
Hong Kong, Lettner and Philipp Stadler had
different approaches to finding a visual
language. The work A27 (2005) and its three
versions V1, V2 and V3 from the series
Spiegelungen (Reflections) utilize the technique
of zooming and therefore of focussing on details.
The microcosm and the macrocosm are at the
same level of importance, since they complement
each other. They appear as opposite coloured
pairs, as in A25 and A26, or unite and
complement each other, as in A61 to A63 and A95.
The pointillist works Solaris 1 (Reflection A17
V1) (Fig. 13), A28 and A30, and A38 to A40
look like colour plays, which visualize the
simultaneous and successive contrasts of the
colour theories of Michel Eugène Chevreul. The
inspiration for the motifs came from patchwork
patterns and the ornamental visual language of
the Orient.
Interesting exceptions in the Spiegelungen
(Reflections) series are the works A45 to A47,
which are also named 33 liegt zwischen den
Zahlen (33 lies between the numbers) (Fig. 14).
In these three panels, Lettner skilfully combines
the principle of serial sequences with the
modular principle.
Fig. 15. Seegras (Seagrass), ca. 1930, designed by Josef Frank
for Haus & Garten, Austria, furnishing fabric of hand
block-printed and glazed cotton, Inv. No. CIRC.830-1967,
Victoria & Albert Museum, London.
Fig. 13. Solaris 1 (Reflection A17 V1), 2005, Robert Lettner and
Philipp Stadler, plotterprint on canvas, Robert Lettner Archive,
Vienna.
Fig. 14. 33 liegt zwischen den Zahlen (Reflections A45, A46,
A47) (33 lies between the numbers), 2007, three parts, Robert
Lettner and Philipp Stadler, plotterprint on canvas, Robert
Lettner Archive, Vienna.
If we examine Spiegelungen (Reflections) in
relationship to the works of other artists, we
see a connection with the fabric and wallpaper
patterns of the Viennese architect and designer
Josef Frank (1885–1967), [25] who said of
ornamental design: "A pattern of organic lines
always has the desire to dissolve the geometrical
form with which it is connected."
In the fabric pattern Seegras (seagrass) (Fig.
15), a block print from the 1930s, the floral
elements of the ornaments are woodcuts, which
are printed on fabric and vary by rotation. This
work, created with stamps or paint rollers, has
simple motifs and complex ornaments created
by multiple rotations. This was a common
design on room walls, in corridors, and in
fabric samples in Vienna around 1900. Lettner
and Philipp Stadler used a similar approach in
their work Kalvarienberg – von allen Seiten
kamen sie (Calvary – they came from all sides,
2010) and A38 to A40 (Fig. 16). But they used
algorithms that created reflections and twists.
Since a comparison with the works of Josef
Frank alone would not be comprehensive
enough, I also compare the digital paintings and
landscape paintings of Lettner with those of
William Morris and the Arts & Crafts
Movement, and examine the influence of Josef
Hoffmann, Koloman Moser and the
ornamentality of the Wiener Werkstätte. [26]
4. Echoes of Worringer, Kandinsky, and Bense
It is perhaps surprising that simple drawings and
simply arranged shapes can create complex
ornamental patterns and reveal artistic qualities.
Lettner pointed out that every shape seems
familiar but does not actually exist in that
form itself: "It only looks like one. More
accurately, it is the fragment of a structure which
is identifiable as such throughout our
civilization and throughout nature and the
world, but ultimately breaks free if it is
stretched, and passes from being a microcosm to
a macrocosm. Ultimately it becomes infinitely
large, and I experience these intervening spaces.
The structure, the ornament, is no longer
identifiable. But if I move away, the ornament
Fig. 16. Kalvarienberg – von allen Seiten kamen sie (Reflection
A10) (Calvary – they came from all sides), 2004, Robert Lettner
and Philipp Stadler, plotterprint on canvas, Robert Lettner
Archive, Vienna.
[…]" [27]
For the art historian Wilhelm Worringer, whose
1907 dissertation Abstraktion und Einfühlung
(Abstraction and Empathy) laid one of the
theoretical foundations for the understanding of
modern abstract art, these "abstract legal forms"
of ornament are "the only ones and the highest
ones," and therefore "it was natural to see in
mathematics the highest art form." [28]
Max Bense, who quoted whole passages from
Worringer in his chapter "Die Mathematik in
der Ornamentik," declared that it is "irrelevant
at first whether the geometric ornament already
existed as such or if it developed from a plant
ornament." [29] For him, the mathematization of
art has “a morphological purpose; it’s not just
creating certain figures from the material
prescribed for the artistic act that is subject to
mathematics; the composition of artistic details
and artistic elements also fall prey to
mathematization." As an example, Bense calls
the "repetition of an element according to the
laws of symmetry one of the most general and
the oldest processes of mathematization in fine
art." [30]
It wasn’t just Max Bense who took
Worringer's Abstraction and Empathy as
inspiration; artists like Wassily Kandinsky and
Franz Marc saw in this important work the
theoretical basis for their artistic involvement in
abstraction. In order to grasp the gaps, Lettner
approached his works with a vision of a
"geometrical ornament" as it was envisioned by
Wassily Kandinsky in 1911: "If we start to
destroy our connection to nature, enforce
liberation by all means, and remain satisfied
with a combination of pure colour and
independent shapes, we will create art that looks
like geometrical ornaments that will look like a
tie or a carpet." [31] It is surprising that the
visual language of the digital paintings comes
so close to Kandinsky's vision of a carpet.
But Lettner was more engaged in lines and
forms that can be ordered as structures, embody
ornament, and lead to an ornamental
consciousness. [32]
Lettner continued to develop the question of
ornament with the help of digital computing
technology for scientific research purposes and
brought it into the art discussion, since, just as
the evolution of language affects society, the
evolution of ornament affects the artistic system.
[33] The calculability of the algorithm leads to
an unpredictable virtual space of experience,
since "it cannot be found more magical than in
the order." [34]
Acknowledgments
This essay is dedicated to Herbert Fleischner in
honor of his 75th birthday. I would like to thank
Margit Lettner, Markus Lettner, and Philipp
Stadler, and especially Park Ji Yun Jade,
Alexandra Woermann and Tobias Klein for their
help.
References
1. Klaus Hoffmann, Neue Ornamentik. Die
ornamentale Kunst im 20. Jahrhundert
(Cologne: DuMont, 1970).
2. Jorge Enciso, Design Motifs of Ancient
Mexico (New York: Dover Publications, 1953);
Claude Humbert, Ornamental Design (Fribourg:
Office du Livre, 1970); Jules Bourgoin, Arabic
Geometrical Pattern & Design (New York:
Dover Publications, 1973); Carol Belanger
Grafton, Traditional Patchwork Patterns (New
York: Dover Publications, 1974).
3. Illustrations in: Harald Kraemer, Robert
Lettner. Das Spiel vom Kommen und Gehen.
Widerstand – Utopie – Landschaft – Ornament
(Klagenfurt: Ritter Verlag, 2018), 131.
4. Harald Kraemer, Robert Lettner, 133.
5. Harald Kraemer, Robert Lettner, 130.
6. Robert Lettner. In Dialogue with the
Chinese Landscape – Utopia of Ornaments –
New Wunderkammer of Rococo, ed. Florian
Knothe and Harald Kraemer, exhibition
catalogue (Hong Kong: University Museum and
Art Gallery, The University of Hong Kong,
2017), 10–11; illustrations 29–35.
7. Der Senfkorngarten. Lehrbuch der
chinesischen Malerei, 2 Vols., ed. Hans Daucher
(Ravensburg: Otto Maier, 1987, Vol. 2), 21;
illustrations see 60–61.
8. Max Bense, Aesthetica (I). Metaphysische
Beobachtungen am Schönen (Stuttgart:
Deutsche Verlags-Anstalt, 1954); Aesthetica
(II). Aesthetische Information (Baden-Baden:
Agis, 1956); Aesthetica (III). Ästhetik und
Zivilisation. Theorie der ästhetischen
Zivilisation (Krefeld/Baden-Baden: Agis,
1958); Aesthetica (IV). Programmierung des
Schönen. Allgemeine Texttheorie und
Textästhetik (Krefeld/Baden-Baden: Agis,
1960).
9. Grant Taylor, "Routing Mondrian: The A.
Michael Noll Experiment," in Journal of the
New Media Caucus 8, no. 2 (2012), accessed
August 28, 2018, http://median.newmediacaucus.org/routing-mondrian-the-a-michael-noll-experiment/
10. Cybernetic Serendipity. The Computer and
the Arts, ed. Jasia Reichardt, exhibition
catalogue (Institute of Contemporary Art,
London 1968; Studio International Special
Issue, Praeger, 1970); Günter Pfeiffer, Kunst
und Kommunikation. Grundlegung einer
kybernetischen Ästhetik (Cologne: DuMont,
1972); Herbert W. Franke and Gottfried Jäger,
Apparative Kunst. Vom Kaleidoskop zum
Computer (Cologne: DuMont, 1973); Frühe
Computergraphik bis 1979. Die Sammlungen
Franke und weitere Stiftungen in der Kunsthalle
Bremen, ed. Wulf Herzogenrath and Barbara
Nierhoff-Wielk (Munich: Deutscher
Kunstverlag, 2007).
11. Stephan Sperlich, "Das Spiel vom Kommen
und Gehen," in Low Frequency Orchestra plays
Robert Lettner: Das Spiel vom Kommen und
Gehen, Wien, 2006. Reprinted in Harald
Kraemer, Robert Lettner, 262–263.
12. Joost Elffers, Tangram. Das alte
chinesische Formenspiel (Cologne: DuMont,
1978).
13. Robert Lettner in conversation with the
author on 27.06.2012.
14. Konrad Paul Liessmann, "Zu Robert
Lettners Diskettenbilder," in Robert Lettner.
Dubliner Thesen zur Informellen Geometrie,
Exhibition catalogue (Galerie Heiligenkreuzerhof, Wien, 1994). Reprinted in Harald
Kraemer, Robert Lettner, 241.
15. Herbert Fleischner & Robert Lettner,
"Mathematik in der Kunst Oder Kunst in der
Mathematik?" Reprinted in Harald Kraemer,
Robert Lettner, 231.
16. Robert Lettner, letter to the Purchase
Commission of the Cultural Department, Lower
Austria Provincial Government, 07.03.1997.
17. Robert Lettner and Harald Kraemer, "Art is
Redeemed, Mystery is Gone. Conversations
with Robert Lettner and Harald Kraemer," in
Robert Lettner, Die Kunst ist erlöst, das Rätsel
ist zu Ende. Bilder zur magischen Geometrie, ed.
Wiener Secession, exhibition catalogue
(Vienna: Wiener Secession, 1998), 15–23.
Reprinted in: Florian Knothe and Harald
Kraemer, Robert Lettner, 39–45.
18. Harald Kraemer, "Ornamentik zwischen
Opulenz und Virtualität: Worringers
Vermächtnis?" in Hundert Jahre 'Abstraktion
und Einfühlung.' Konstellationen um Wilhelm
Worringer, ed. Norberto Gramaccini and
Johannes Rössler (Munich: Wilhelm Fink,
2012), 259–276, see 271.
19. Harald Kraemer, Robert Lettner, Mara
Reissberger and Burghart Schmidt, Im Bild über
Bilder sprechen. Über die Dialektik des
Fadenscheinigen im Ornament (Vienna: Verlag
der Universität für angewandte Kunst Wien,
2006).
20. Chapter IV, "Die Arabeske," in Alois Riegl,
Stilfragen. Grundlegung zu einer Geschichte
der Ornamentik (Berlin: Verlag von Georg
Siemens, 1893), 259–346.
21. Annemarie Schimmel, "Die Arabeske und
das islamische Weltgefühl" in Ornament und
Abstraktion – Kunst der Kulturen, Moderne und
Gegenwart im Dialog, ed. Markus Brüderlin,
Fondation Beyeler Riehen/Basel, exhibition
catalogue (Köln: DuMont, 2001), 31–35, see 31.
22. Robert Lettner in conversation with the
author on 28.06.2012.
23. Max Bense, "Die Mathematik in der
Ornamentik," in Konturen einer Geistesgeschichte der Mathematik II. Die Mathematik in
der Kunst (Hamburg, 1949), 57–77, see 57.
24. Robert Lettner in conversation with the
author on 28.06.2012.
25. Josef Frank 1885 – 1967, exhibition
catalogue, (Vienna: Hochschule für angewandte
Kunst, 1981); Josef Frank. Stoffe Tapeten
Teppiche, exhibition catalogue (Vienna:
Hochschule für angewandte Kunst, 1986), 62,
see also 28, illustrations 25–28.
26. Linda Parry, William Morris. Textiles, ed.
Victoria & Albert Museum, London (V&A
Publishing, 1983, Reprint 2013); Linda Parry,
Textiles from the Arts and Crafts Movement
(London: Thames and Hudson, 2005); Josef
Hoffmann. Ornament zwischen Verbrechen und
Hoffnung, exhibition catalogue (Vienna:
Museum für angewandte Kunst, Wien, 1987);
Angela Völker, Die Stoffe der Wiener
Werkstätte 1910–1932, ed. MAK Wien (Vienna:
Brandstätter Verlag, 1990/2004).
27. Robert Lettner, Die Kunst ist erlöst, 16.
Reprinted in Florian Knothe and Harald
Kraemer, Robert Lettner, 40.
28. Wilhelm Worringer, Abstraktion und
Einfühlung, Ein Beitrag zur Stilpsychologie
(Neuwied: Heuer'sche Verlags-Druckerei,
1907), at the same time Dissertation, Faculty of
Philology, University Bern, 12.1.1907. Reprint:
Munich: Fink, 2007, Vol. 1, 39–139, see 76. On
the input of Worringer: Hundert Jahre
'Abstraktion und Einfühlung.' Konstellationen
um Wilhelm Worringer, ed. Norberto
Gramaccini and Johannes Rössler (Munich:
Wilhelm Fink, 2012).
29. Max Bense, Konturen einer
Geistesgeschichte der Mathematik II. Die
Mathematik in der Kunst (Hamburg: Claassen
& Goverts, 1949). See chapter "Die Mathematik
in der Ornamentik," 57–77; about Worringer
59–61.
30. Max Bense, Konturen, 59, 57.
31. Wassily Kandinsky, Über das Geistige in der
Kunst (Munich, 1912; 10th edition, Bern:
Benteli, 1973), 115. See also Harald Kraemer,
"Ornamentik zwischen Opulenz und Virtualität:
Worringers Vermächtnis?" in Norberto
Gramaccini and Johannes Rössler, Hundert
Jahre 'Abstraktion und Einfühlung', 273.
32. Robert Lettner. Vienna Secession, 16.
Reprinted in Florian Knothe and Harald
Kraemer, Robert Lettner, 40.
33. Niklas Luhmann, Die Kunst der Gesellschaft
(Frankfurt/Main: Suhrkamp, 1995), 349.
34. Robert Lettner, Die Kunst ist erlöst, 16.
Reprinted in Florian Knothe and Harald
Kraemer, Robert Lettner, 40.
Bibliography
Belanger Grafton, Carol. Traditional Patchwork
Patterns, New York: Dover Publications,
1974.
Bense, Max. Konturen einer Geistesgeschichte
der Mathematik II. Die Mathematik in der
Kunst, Hamburg: Claassen & Goverts, 1949.
Bense, Max. Aesthetica (I). Metaphysische
Beobachtungen am Schönen, Stuttgart:
Deutsche Verlags-Anstalt, 1954.
Bense, Max. Aesthetica (II). Aesthetische
Information, Agis, Baden-Baden: Agis,
1956.
Bense, Max. Aesthetica (III). Ästhetik und
Zivilisation. Theorie der ästhetischen
Zivilisation, Krefeld/Baden-Baden: Agis,
1958.
Bense, Max. Aesthetica (IV). Programmierung
des Schönen. Allgemeine Texttheorie und
Textästhetik, Krefeld/Baden-Baden: Agis,
1960.
Bourgoin, Jules. Arabic Geometrical Pattern &
Design, New York: Dover Publications,
1973.
Cybernetic Serendipity. The Computer and the
Arts, edited by Jasia Reichardt. Institute of
Contemporary Art, London 1968. Exhibition
catalogue. Studio International Special Issue,
Praeger, 1970.
Daucher, Hans (Ed.). Der Senfkorngarten.
Lehrbuch der chinesischen Malerei, 2 Vols.,
Ravensburg: Otto Maier, 1987.
Elffers, Joost. Tangram. Das alte chinesische
Formenspiel, Cologne: DuMont, 1978.
Enciso, Jorge. Design Motifs of Ancient Mexico,
New York: Dover Publications, 1953.
Fleischner, Herbert and Robert Lettner.
"Mathematik in der Kunst Oder Kunst in der
Mathematik?", In Harald Kraemer: Robert
Lettner. Das Spiel vom Kommen und Gehen.
Widerstand – Utopie – Landschaft –
Ornament, Klagenfurt: Ritter Verlag, 2018,
231.
Frank, Josef. 1885–1967. Hochschule für
angewandte Kunst Wien. Exhibition catalogue.
Vienna: Hochschule für angewandte Kunst,
1981.
Frank, Josef. Stoffe Tapeten Teppiche.
Hochschule für angewandte Kunst Wien.
Exhibition catalogue. Vienna: Hochschule
für angewandte Kunst, 1986.
Franke, Herbert W. and Gottfried Jäger.
Apparative Kunst. Vom Kaleidoskop zum
Computer, Cologne: DuMont, 1973.
Frühe Computergraphik bis 1979. Die
Sammlungen Franke und weitere Stiftungen
in der Kunsthalle Bremen, edited by Wulf
Herzogenrath and Barbara Nierhoff-Wielk.
Kunsthalle Bremen. Exhibition catalogue.
Munich: Deutscher Kunstverlag, 2007.
Taylor, Grant. "Routing Mondrian: The A.
Michael Noll Experiment." In Journal of the
New Media Caucus, Fall 2012, V.08, No. 02.
Accessed August 28, 2018.
http://median.newmediacaucus.org/routing-mondrian-the-a-michael-noll-experiment/
Hoffmann, Josef. Ornament zwischen
Verbrechen und Hoffnung. Museum für
angewandte Kunst Wien. Exhibition
catalogue. Vienna: Museum für angewandte
Kunst, Wien, 1987.
Hoffmann, Klaus. Neue Ornamentik. Die
ornamentale Kunst im 20. Jahrhundert.
Cologne: DuMont, 1970.
Humbert, Claude. Ornamental Design.
Fribourg: Office du Livre, 1970.
Hundert Jahre 'Abstraktion und Einfühlung.'
Konstellationen um Wilhelm Worringer,
edited by Norberto Gramaccini and Johannes
Rössler. Munich: Wilhelm Fink, 2012.
Kandinsky, Wassily. Über das Geistige in der
Kunst. [Munich, 1912], 10th edition, Bern:
Benteli, 1973.
Kraemer, Harald. Robert Lettner. Das Spiel vom
Kommen und Gehen. Widerstand – Utopie –
Landschaft – Ornament. Klagenfurt: Ritter
Verlag, 2018.
Kraemer, Harald. "Ornamentik zwischen
Opulenz und Virtualität: Worringers
Vermächtnis?" In Hundert Jahre 'Abstraktion
und Einfühlung.' Konstellationen um Wilhelm
Worringer, edited by Norberto Gramaccini
and Johannes Rössler. Munich: Wilhelm
Fink, 2012, pp. 259–276.
Kraemer, Harald, Robert Lettner, Mara
Reissberger and Burghart Schmidt: Im Bild
über Bilder sprechen. Über die Dialektik des
Fadenscheinigen im Ornament. Vienna:
Universität für angewandte Kunst, 2006.
Lettner, Robert. In Dialogue with the Chinese
Landscape – Utopia of Ornaments – New
Wunderkammer of Rococo, edited by Florian
Knothe and Harald Kraemer. University of
Hong Kong Museum and Art Gallery (26.04.
– 18.06.2017); School of Creative Media,
City University of Hong Kong (25.03. –
19.04.2017; 25.03. – 03.04.2017), Exhibition
catalogue. Hong Kong: University Museum
and Art Gallery, The University of Hong
Kong, 2017.
Lettner, Robert and Harald Kraemer. "Art is
Redeemed, Mystery is Gone. Conversations
with Robert Lettner and Harald Kraemer." In
Robert Lettner: Die Kunst ist erlöst, das
Rätsel ist zu Ende. Bilder zur magischen
Geometrie, edited by Wiener Secession.
Exhibition catalogue 20.11.1998 –
17.01.1999. Vienna: Wiener Secession,
1998, 15–23. Reprinted in Robert Lettner. In
Dialogue with the Chinese Landscape –
Utopia of Ornaments – New Wunderkammer
of Rococo, edited by Florian Knothe and
Harald Kraemer. University Museum and Art
Gallery, Exhibition catalogue, Hong Kong:
University Museum and Art Gallery, The
University of Hong Kong, 2017, 39–45.
Liessmann, Konrad Paul. "Zu Robert Lettners
Diskettenbilder (1994)." In Robert Lettner.
Dubliner Thesen zur Informellen Geometrie.
Galerie Heiligenkreuzerhof, Vienna, 1994.
Exhibition catalogue. Reprinted in Harald
Kraemer. Robert Lettner. Das Spiel vom
Kommen und Gehen. Widerstand – Utopie –
Landschaft –Ornament, Klagenfurt: Ritter
Verlag, 2018, 241.
Luhmann, Niklas. Die Kunst der Gesellschaft.
Frankfurt/Main: Suhrkamp, 1995.
Ornament und Abstraktion – Kunst der Kulturen,
Moderne und Gegenwart im Dialog, edited
by Markus Brüderlin. Fondation Beyeler
Riehen/Basel. Exhibition catalogue 10.6. –
7.10.2001. Köln: DuMont, 2001.
Parry, Linda. William Morris. Textiles, edited by
Victoria & Albert Museum. London: V&A
Publishing, 1983. Reprint 2013.
Parry, Linda. Textiles from the Arts and Crafts
Movement. London: Thames and Hudson,
2005.
Pfeiffer, Günter. Kunst und Kommunikation.
Grundlegung einer kybernetischen Ästhetik.
Cologne: DuMont, 1972.
Riegl, Alois. Stilfragen. Grundlegung zu einer
Geschichte der Ornamentik. Berlin: Verlag
von Georg Siemens, 1893.
Schimmel, Annemarie. "Die Arabeske und das
islamische Weltgefühl (2001)." In Ornament
und Abstraktion – Kunst der Kulturen,
Moderne und Gegenwart im Dialog, edited
by Markus Brüderlin. Fondation Beyeler
Riehen/Basel. Exhibition catalogue 10.6. –
7.10.2001. Köln: DuMont, 2001, 31–35.
Sperlich, Stephan. "Das Spiel vom Kommen
und Gehen." In Low Frequency Orchestra
plays Robert Lettner: Das Spiel vom
Kommen und Gehen, Wien, 2006.
Reprinted in Harald Kraemer. Robert
Lettner. Das Spiel vom Kommen und Gehen.
Widerstand – Utopie – Landschaft –
Ornament, Klagenfurt: Ritter Verlag, 2018,
262–263.
Völker, Angela. Die Stoffe der Wiener
Werkstätte 1910–1932, edited by MAK Wien.
Vienna: Brandstätter Verlag, 1990/2004.
Worringer, Wilhelm. Abstraktion und
Einfühlung. Ein Beitrag zur Stilpsychologie.
Neuwied: Heuer'sche Verlags-Druckerei,
1907; at the same time Dissertation, Faculty
of Philology, University of Bern, 12.1.1907.
Reprint: Munich: Fink, 2007, Vol. 1, 39–139.
Illustrations
Fig. 1. Raster (grid), 1972, Robert Lettner,
pencil, ink on paper, H 21 x W 29,6 cm,
Robert Lettner Archive, Vienna. Ill. in
H. Kraemer, Robert Lettner, 2018, 133.
Fig. 2. Raster (grid), 1972, Robert Lettner,
pencil, ink on paper, H 20 x W 16,2 cm,
Robert Lettner Archive, Vienna. Ill. in
H. Kraemer, Robert Lettner, 2018, 133.
Fig. 3. Das Spiel vom Kommen und Gehen
(Klebebilder) (The play of come and go,
tape images), 1978–2010, Robert
Lettner, acrylic on tape, H 29,7 x W 21
cm, Robert Lettner Archive, Vienna. Ill.
in H. Kraemer, Robert Lettner, 2018,
116.
Fig. 4. Das Spiel vom Kommen und Gehen
(Tuschezeichnungen), (The play of
come and go, ink drawings), 1982,
Robert Lettner, ink on paper, Robert
Lettner Archive, Vienna. Ill. in H.
Kraemer, Robert Lettner, 2018, 189.
Fig. 5. Figurationen (Configurations), 1991–
1992, Robert Lettner, acryl on canvas,
H 200 x W 100 cm, Robert Lettner
Archive, Vienna. Ill. in H. Kraemer,
Robert Lettner, 2018, 124.
Fig. 6. Drei Eindeutigkeiten des
Mathematikers Herbert Fleischner
(Figure 6) (Three uniquenesses of the
mathematician Herbert Fleischner,
Figure 6) / Drei Mutationen des Malers
Robert Lettner (Figure 5) (Three
mutations of the painter Robert Lettner,
Figure 5), 1992, Robert Lettner,
silkscreen on canvas, H 119 x W 84 cm,
Robert Lettner Archive, Vienna. Ill. in
H. Kraemer, Robert Lettner, 2018, 125.
Fig. 7. Die magische Geometrie (Klebebild),
(The magical geometry, tape image),
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Part I. Full Papers (peer-reviewed)
1981, Robert Lettner, tape strips with
acrylic on photocopy, H 35 x W 50 cm,
Robert Lettner Archive, Vienna. Ill. in
H. Kraemer, Robert Lettner, 2018, 111.
Fig. 8. Bilder zur magischen Geometrie, Serie
I/13; Serie I/2, (Paintings of magical
geometry), 1996, Robert Lettner,
plotterprint on canvas, H 130 x W 150
cm; H 130 x W 180 cm, Robert Lettner
Archive, Vienna. Ill. in H. Kraemer,
Robert Lettner, 2018, 252; 135.
Fig. 9. Über die Dialektik des Fadenscheinigen
im Ornament (On the dialectic of the
flimsy in ornament), 2000, Robert
Lettner and Walter Worlitschek, inkjet
on canvas, H 200 x W 140 cm, Robert
Lettner Archive, Vienna. Ill. in
H. Kraemer, Robert Lettner, 2018, 140.
Fig. 10. Das unsichtbare Archiv des Arcimboldo
(The invisible archive of Arcimboldo),
2003, Robert Lettner and Philipp
Stadler, plotterprint on canvas, H 200 x
W 140 cm, Robert Lettner Archive,
Vienna. Ill. in H. Kraemer, Robert
Lettner, 2018, 141.
Fig. 11. Mein Uterus verlangt nach deinem
Zungenkuss, Reflection A4 V1 (My
uterus demands your tongue kiss), 2003,
Robert Lettner and Philipp Stadler,
plotterprint on canvas, H 200 x W 200
cm, Robert Lettner Archive, Vienna. Ill.
in H. Kraemer, Robert Lettner, 2018,
142.
Fig. 12. An der Schnittstelle zur Unendlichkeit
(Reflection A67 V4), (At the interface to
infinity), 2009, Robert Lettner and
Philipp Stadler, plotterprint on canvas,
Robert Lettner Archive, Vienna. Ill. in
H. Kraemer, Robert Lettner, 2018, 149.
Fig. 13. Solaris 1 (Reflection A17 V1), 2005,
Robert Lettner and Philipp Stadler,
plotterprint on canvas, H 200 x W 200
cm, Robert Lettner Archive, Vienna. Ill.
in H. Kraemer, Robert Lettner, 2018,
151.
Fig. 14. 33 liegt zwischen den Zahlen
(Reflections A45, A46, A47), (33 lies
between the numbers), 2007, three
parts, Robert Lettner and Philipp
Stadler, plotterprint on canvas, H 200 x
W 420 cm, Robert Lettner Archive,
Vienna. Ill. in A. Jankowski, R. Lettner
and B. Schmidt, Philosophie der
Landschaft, 2011, 202–203.
Fig. 15. Seegras (Seagrass), ca. 1930, designed
by Josef Frank for Haus & Garten,
Austria, furnishing fabric of hand
block-printed and glazed cotton, Inv.
No. CIRC.830-1967, Victoria & Albert
Museum, London.
<http://collections.vam.ac.uk/item/O267089/seegras-furnishing-fabric-frankjosef/>
Fig. 16. Kalvarienberg von allen Seiten kamen
sie (Reflection A10), (Calvary – they
came from all sides), 2004, Robert
Lettner and Philipp Stadler, plotterprint
on canvas, H 200 x W 200 cm, Robert
Lettner Archive, Vienna. Ill. in A.
Jankowski, R. Lettner and B. Schmidt,
Philosophie der Landschaft, 2011, 184.
Part II. Scholarly Abstracts
The Present Tense of Virtual Space
Dr Andrew Burrell
Faculty of Design, Architecture and Building
University of Technology Sydney
andrew.burrell@uts.edu.au
Abstract
This paper presents my ongoing investigation
into narrative spaces in immersive virtual
environments. It focuses on two recent projects,
“p<AR>k*land*” and “loft,” but also uses other
examples from over twenty years of practice-based
research utilising virtual environments to
tell spatial stories. I develop an argument that
our understanding of virtual space exists as an
extension of physical space, rather than an
adjunct to it. Using Robert Morris’s seminal
text, “The Present Tense of Space,” as a starting
point, I explore the role of memory and
imagination in our understanding of, and in
relation to, virtual environments as
phenomenologically real spaces. This leads into
an exploration of classical mnemonic spaces, as
virtual environments, to support an
understanding of the functionality of some of the
spatial affordances of virtual environments.
Fig 1. p<AR>k*land*, 2017, Andrew Burrell and Nori Beppu,
augmented reality installation.
“p<AR>k*land*” is a playful interactive
augmented reality experience that presents a
virtual parkland that comes to life before the
viewer’s eyes. The audience can interact with a
menagerie of creatures as they help to create the
augmented environment these creatures inhabit.
“p<AR>k*land*” is designed for a wide
audience but targets children in particular. It was
created by ab:nb, the collaborative duo of
Andrew Burrell and Nori Beppu.
Fig 2. Loft, 2017, Andrew Burrell, interactive webVR project.
“Loft” is a webVR narrative experience. It
consists of a self-contained environment that
plays out for the viewer based on its own logic.
With limited agency granted them, the viewer’s
role will initially feel like one of pure
observation, but as the world unfolds around
them, they will find that their point of view, and
how they choose to navigate the space, will
make critical differences to how they experience
the narrative and logic of this world. “Loft”
premiered in the 2017 ACM SIGGRAPH
Digital Art Community WebVR Exhibition.
What these two projects have in common is
that they are part of an ongoing investigation
into the use of immersive virtual environments
(regardless of the technology used to access
them) to bring the user into a narrative space
designed specifically for the affordances of
these environments. In many ways, these
projects are developed in spite of, rather than
because of, the emergence of consumer grade
virtual and augmented reality headsets and are
informed by a much longer history of working
and creating in virtual environments – a practice
originally informed by installation art practice.
Both of these projects, and the others I will
discuss, are generative in nature, and the
generative systems behind them influence and
build the narratives created with the additional
input of the viewer. “p<AR>k*land*” is built
upon a combinatorial framework of characters
and props brought together by the viewer as
augmentations on the screen in front of them,
while the virtual environment of “loft” is built in
real time as the viewer literally floats in space,
building a narrative from the fragments
generated around them.
These examples form a framework to support
the argument that central to the experiential
nature of the resulting virtual environments is a
reversal of the logic of Morris’s original notion
that “real space is not experienced except in real
time” through an understanding that immersive
virtual space, experienced in real time, becomes
real via the resultant phenomenological
experience of “the present tense of virtual
space.” [2]
References
1. Robert Morris, “The Present Tense of
Space,” Art in America, January–February
1978: 70–81.
2. Robert Morris, “The Present Tense of
Space,” 70.
Biography
Andrew Burrell is a practice-based researcher
and educator exploring virtual and digitally
mediated environments as a site for the
construction, experience and exploration of
memory as narrative.
His ongoing research investigates the
relationship between imagined and remembered
narrative and how the multi-layered biological
and technological encoding of human
subjectivity may be portrayed within, and
inform the design of, virtual and augmented
environments. He is a lecturer in Visual
Communication at the University of Technology
Sydney.
Computational Photography
LIM, Yeon-Kyoung
School of Creative Media, City University of Hong Kong
yklim3-c@my.cityu.edu.hk
Abstract
Artist Sascha Pohflepp’s Buttons is a
speculative photography work that operates
without a lens. In this work, a smartphone
camera shows us a photograph apparently taken
at the moment the shutter button is pressed, but
the image is in fact mined from an image-sharing
site. [1] Media artist Hito Steyerl once stated
that she had met a developer working on a
smartphone camera technology that lets us
“create” a photograph based on the stored data
of photo galleries and Social Networking
Services. [2] These examples point to a
“computational photography” that focuses less
on representing the presence of a subject in
front of the camera than on the performances of
networked objects, anticipating what a
photographer-user might like to see.
Computational photography transforms a
photograph into a new photographic image
based upon a stored database made of agents’
collective choices and their memory. Thus it
relates to theories of time that focus on the
externalization of memory with the aid of
technical things. [3][4] In this study, I will
attend to computational photography,
questioning how memory is not stored in
individual consciousness but rather
co-externalizes through the braided
collaboration between humans and technical
things.
Fig 1. Buttons, 2006-2010, Sascha Pohflepp, electronics and
smart-phone app, © Sascha Pohflepp
References
1. Sarah Cook, “Stop, Drop, and Roll with it:
Curating Participatory Media Art” ed.,
Bianchini, Samuel, and Erik Verhagen.
Practicable: From Participation to Interaction
in Contemporary Art. (London: MIT Press,
2016), 389-390.
2. Hito Steyerl, “Politics of Post-Representation,”
DIS, http://dismagazine.com/disillusioned-2/62143/hito-steyerl-politics-of-post-representation,
accessed July 15, 2018.
3. Bernard Stiegler, “The Industrialization of
Memory,” Technics and Time II: Disorientation
(California: Stanford University Press, 1998),
98.
4. Ben Roberts, “Cinema as mnemotechnics:
Bernard Stiegler and the ‘industrialization of
memory’,” Angelaki: Journal of Theoretical
Humanities 11, no. 1 (2006), 55-63.
Biography
Yeon-Kyoung LIM is a PhD candidate in the
School of Creative Media, City University of
Hong Kong. Her research lies at the intersection
of Digital Humanities, Media Art, Affect theory,
and Gender studies. Yeon-Kyoung’s research
uses digital ethnography to study human-machine
intimacy. Her study aims to explore
digital art/culture in which human beings view
digital applications as intimate companions,
focusing on the impact of this sense of intimacy.
import <execute> [as <command>]
Korsten & De Jong
ArtEZ Institute of the Arts
korstendejong@hotmail.com
“The multitude is biopolitical self-organization.” – Hardt and Negri
Abstract
LeWitt has stated that “[t]he idea becomes a
machine that makes the art.” [1] In his
estimation, conceptual art is nothing but a type
of code for art making. LeWitt’s art is an
algorithmic process. Hayles has also reflected
on the multidimensionality of digital signs. [2]
Her term “flickering signifiers” shows that
digital images are the visible manifestations of
underlayers of code often hidden. In Protocol
Galloway has claimed that “[c]ode is the only
language that is executable.” [3] As an Artistic
Research duo, Korsten & De Jong are interested
in precisely this performativity of code and in how
they can position code in such a way that it
informs theoretical concepts in the act of
making. In their Paper-Performance they will
operate on the Critical Engineering Manifesto’s
seventh command, which reads: “7. The Critical
Engineer observes the space between the
production and consumption of technology.
Acting rapidly to changes in this space, the
Critical Engineer serves to expose moments of
imbalance and deception.” [4]
In working together as a duo they bring to
the table notions revolving around ‘Toyotism.’
As Galloway has pointed out, Toyotism
originates in Japanese automotive production
facilities. “Within Toyotism, small pods of
workers mass together to solve a specific
problem. The pods are not linear and fixed like
the more traditional assembly line, but rather
they are flexible and reconfigurable depending
on whatever problem might be posed to them.”
[5] As Sterling puts it, “‘ad-hocracy’ would rule,
with groups of people spontaneously knitting
together across organizational lines, tackling the
problem at hand, applying intense computer-aided
expertise to it, and then vanishing whence
they came.” [6] It leads Brand to invert Marx
and Engels’s Communist Manifesto message of
resistance-to-unity into “Workers of the World,
fan out.” [7]
Fig 1. Paper-Performance Text[ure], 2018, Korsten & De Jong,
mixed media, Copyright Korsten & De Jong.
It is a strong incentive to move away from
homophily, which Chun has defined as a way of
being comfortable only when exposed to things
that are in line with our own norms and values. If
homophily is a natural condition of networks,
existing segregations in society are maintained.
This segregation will only increase because the
algorithms we have today contain inbuilt bias.
Algorithms push people into clusters of
sameness. She reflects on police profiling
systems with the remark that “[…] they place
people on the heat list based not solely on what
these people did but rather on what their
perceived network-neighbors did.” [8]
With their Paper Performance Korsten & De
Jong seek to challenge the notion of the bunker
as formulated by Critical Art Ensemble. For
them “[…] the bunker is both material and
ideational. On the one hand, it serves as a
concrete garrison where image (troops) reside.
On the other hand, it confirms state-sponsored
reality, by forever solidifying the reified notions
of class, race, and gender. Bunkers in their
totality as spectacle colonize the mind, and
construct the micro-bunker of reification, which
in turn is the most difficult of all to penetrate and
destroy.” [9]
References
1. Sol LeWitt, “Paragraphs on Conceptual Art,”
in Conceptual Art: A Critical Anthology ed.
Alexander Alberro and Blake Stimson,
(Cambridge: MIT Press, 1999), 12.
2. N. Katherine Hayles, “Virtual Bodies and
Flickering Signifiers,” in October, Vol. 66,
(Cambridge and London: MIT Press, Autumn
1993), 69-91.
3. Alexander Galloway, Protocol, (Cambridge
and London: MIT Press, 2004), 165.
4. https://criticalengineering.org/ce.pdf,
accessed 15 July 2018.
5. Alexander Galloway, Protocol, 159.
6. Bruce Sterling, The Hacker Crackdown,
(New York: Bantam Books, 1992), 184.
7. Stewart Brand, The Media Lab: Inventing the
Future at MIT (New York: Viking, 1987), 264.
8. Wendy Hui Kyong Chun, “Crisis + Habit =
Update” (lecture, Sonic Acts Festival, 25
February 2017), 7:33–7:43.
9. Critical Art Ensemble, Electronic Civil
Disobedience & Other Unpopular Ideas, 2009,
http://www.critical-art.net/books/ecd/, accessed
15 July 2018.
Biography
Korsten & De Jong conduct Artistic Research as
a duo. They are both independent artists,
researchers and employed as lecturers in the art
and theory department of ArtEZ, University of
the Arts and they participate in the Professorship
“Theory in Arts.” In “Paper-Performances,”
Korsten & De Jong circulate parts of recorded
dialogues on theoretical notions structured or
questioned by artistic form. Their works relate
in a “frictuous” manner to site, subject positions,
and forms of research and reveal what may have
been hidden behind conventions. The tension
between the seemingly binary opposition
between theoretical and artistic practices is
made productive in the field of artistic research.
The (un)predictability of Text-Based Processing
in Machine Learning Art
Winnie Soon
Aarhus University
wsoon@cc.au.dk
Abstract
This article investigates the unpredictable vector
of liveness within the context of machine
learning art with a focus on text-based
processing. [1] It is observed that there are
similarities between generative art and machine
learning art as both produce unpredictable
results. According to Noah Wardrip-Fruin, the
generative art form, such as Loveletters (1952),
can be considered as a system that generates
unpredictable outcomes. [2] Loveletters,
allegedly the first digital literary work by
computer scientist Christopher Strachey, is
regarded as an ‘unpredictable manifestation’ of
a system. [3] This system generates different
variations of love letters, and such unpredictable
manifestation is conditioned by two hidden
elements: data and processes. The use of random
algorithms plays an important role in generative
art (Turing’s random algorithm with its random
number generator was used in Loveletters) to
produce autonomous and unpredictable
outcomes. However, machine learning
emphasizes ‘predictive power,’ in which
prediction is produced through feeding in a large
amount of training data. [4] Additionally, this
kind of system employs predictive models and
statistical algorithms to accomplish data
processing and analysis. Machine Learning Art,
such as text/novel generators, is claimed to be
able to produce text in a writing style similar to
that of the provided training corpus, but it also
produces unpredictable text through setting
different control parameters, such as number of
epochs, amount of neural network layers and
their hidden units, temperature and batch size.
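Among these control parameters, temperature most directly governs the balance between predictability and unpredictability: sampling the next character from the model's output distribution is nearly deterministic at low temperatures and nearly uniform at high ones. A minimal sketch of temperature sampling, written here in plain Python as a generic illustration (not code from ml5.js or from the training script discussed below):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick an index from raw model scores (logits).

    Low temperature sharpens the distribution (predictable output);
    high temperature flattens it (unpredictable output).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    threshold = rng.random() * total
    cumulative = 0.0
    for index, weight in enumerate(weights):
        cumulative += weight
        if threshold < cumulative:
            return index
    return len(weights) - 1  # guard against floating-point rounding
```

At a temperature near zero the same prompt yields the same continuation almost every time; at a very high temperature the model's "writing style" dissolves into noise.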
This article is the result of experiments with an
open source machine learning library called
ml5.js, which is built on top of TensorFlow.js, a
JavaScript framework for training and
deploying machine learning models. [5] ml5.js
provides immediate access in the web browser
to pretrained models for generating text. A
Python training script that employs the
tensorflow library takes in a large amount of
text and trains a custom dataset into a pretrained
model, which can then be used in the ml5.js
environment. [6] The study of the JavaScript
libraries and the Python
script, with a specific focus on next character
prediction and recurrent neural networks
(RNN), unfolds the machine learning processes
from data training to Long Short-Term Memory
networks. [7][8] Building upon the notion of
generativity, this article discusses the
(un)predictable vector by examining the
intertwining force between predictability and
unpredictability that constitutes the liveness of
text-based processing in machine learning art.
[9][10][2] This paper argues that the
(un)predictable vector of liveness helps to build
an understanding of the relation between, rather
than the separation of, training and execution
processes, as well as the resultant actions that
extend the aesthetic and live experience of machine
learning art. The article contributes to a broader
understanding of generativity and liveness in
machine learning art that employs generative
models.
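The distinction rehearsed above, between a system that produces prediction by being fed training data and one that produces surprise, can be reduced to a toy illustration. The sketch below (plain Python, a deliberately simplified stand-in for the character-level RNN pipeline, not actual ml5.js or TensorFlow code) counts which character follows which in a training text and then "predicts" the next character:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Record, for each character, how often each character follows it."""
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, char):
    """Return the most frequent successor of `char` in the training data."""
    if char not in following:
        return None  # character never seen during training
    return following[char].most_common(1)[0][0]
```

A real LSTM conditions on a much longer context and generalizes beyond exact counts, but the dependency is the same: the model can only predict what its corpus makes statistically likely, which is why unpredictability has to be reintroduced through sampling parameters such as temperature.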
References
1. Winnie Soon, “Executing Liveness: An
Examination of the Live Dimension of Code
Inter-actions in Software (Art) Practice” (Ph.D.
diss., Aarhus University, 2016).
2. Noah Wardrip-Fruin, “Digital Media
Archaeology: Interpreting Computational
Processes.” In Media Archaeology: Approaches,
Applications, and Implications, edited by Erkki
Huhtamo & Jussi Parikka (Berkeley: University
of California Press, 2011).
3. Noah Wardrip-Fruin, “Digital Media
Archaeology: Interpreting Computational
Processes” In Media Archaeology: Approaches,
Applications, and Implications, edited by Erkki
Huhtamo & Jussi Parikka (Berkeley: University
of California Press, 2011), 306.
4. Adrian Mackenzie, “The Production of
Prediction: What Does Machine Learning
Want?” European Journal of Cultural Studies
18, nos. 4–5 (2015): 429–445.
5. NYU ITP, “ml5js · Friendly Machine
Learning for the Web,” ml5js website, accessed
July 16, 2018, https://ml5js.org/.
6. NYU ITP, “Training an LSTM network and
using the model in ml5js,” ml5js GitHub website,
accessed October 15, 2018,
https://github.com/ml5js/training-lstm/.
7. Christopher Olah, “Understanding LSTM
Networks (2015),” colah’s blog, accessed July
16, 2018, http://colah.github.io/posts/2015-08-Understanding-LSTMs/.
8. Andrej Karpathy, “The Unreasonable
Effectiveness of Recurrent Neural Networks
(2015),” Andrej Karpathy blog, accessed July
16, 2018, http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
9. Philip Galanter, “What is Generative Art?
Complexity Theory as a Context for Art Theory”
(paper based on a talk presented at GA2003–6th
Generative Art Conference, 2003),
https://www.philipgalanter.com/downloads/ga2003_paper.pdf.
10. Philip Galanter, “Generative Art Theory,” in
A Companion to Digital Art, ed. Christiane Paul
(Wiley-Blackwell, 2016).
Biography
Winnie Soon is an artist-researcher who was born
in Hong Kong and is currently based in
Denmark. Informed by the cultural, social and
political context of technology, Winnie’s work
spans the fields of artistic practice, media art,
software studies, cultural studies and computer
science, examining the materiality of
computational processes that underwrite our
experiences and realities in digital culture via
artistic and/or coding practice. Her works
explore themes and concepts around digital
culture, specifically internet censorship, data
circulation, real-time processing/liveness, and
the culture of code practice. Winnie’s projects
have been exhibited and presented
internationally at museums, festivals,
universities and conferences across Europe,
Asia and America. Her current research focuses
on exploratory and aesthetic programming,
working on two books titled Aesthetic
Programming: A Handbook of Software Studies,
or Software Studies for Dummies (with Geoff
Cox) and Fix My Code (with Cornelia Sollfrank).
She is Assistant Professor at Aarhus University.
More info: http://www.siusoon.net
The Viewer Under Surveillance from the Interactive Artwork
Raivo Kelomees
Estonian Academy of Arts
offline@online.ee
Abstract
The goal of this presentation is to discuss and
analyze viewer-watching artworks: the reversed
situation in the exhibition space where artworks
‘look’ at the viewer. To approach these
questions, I first looked at the topics of machine
vision, computer vision, biovision and the
evolution of vision.
Dividing interactive artworks into four
categories (distant, contact, chance-based and
bio-based/symbiotic interaction) enabled me to
illustrate developments in feedback systems
which became evident in recent decades.
‘Seeing Machines’ and Interactive Art
The meeting of the viewer and the artwork is a
meeting between the living and non-living.
Traditionally, one is looking and the other is
looked at; one is moving and the other is static.
However, exhibitions of contemporary media
art offer encounters with artworks which are
themselves ‘looking’ at the viewer. The visitor
remains (willingly or not) in the zone of the
artwork's sensors and his image—or other
activity-based information—becomes the raw
material for manipulation of the artwork. We
can describe this as a situation where the
relationship of the viewer and the viewed is
reversed: the artwork's “gaze” is turned toward
the viewer, such that the owner of the “gaze" is
the artwork, not the viewer.
I would like to elaborate different categories
of interactive and biofeedback art from the
point of view of “seeing machines.” This helps
answer the following questions: do we have
here a new spectator paradigm in which the
artwork is active and no longer simply an
object under observation? Can we justifiably
say that the artwork's “gaze” is projected onto
the spectator? Are there parallels to be found in
art history or do we see here something which
belongs to the digital era? Is this phenomenon
only common to technical and interactive art?
I would like to bring an example from the
interactive art field, which illustrates the
changed situation and art trends. Golan Levin's
and Greg Baltus' Opto-Isolator (2007) reverses
the audience position: a sculptural eye on the
wall follows the eyes of the viewer. [1] The
viewer encounters a framed mechanical blinking
sculpture on the wall—a mechatronic eye—
which follows the movement of the spectator's
eyes and responds with psychosocial behavior:
looking at the viewer, turning its eyes away as if
shy when looked at too long, etc. Rather similar
are Double-Taker (Snout) (2008) and Eyecode
(2007). All the above offer clear examples of
ironic artworks based around looking at the
viewer(s).
We can approach this topic by mentioning the
video feedback artworks of the 1970s: works by
Bruce Nauman, Dan Graham, Peter Campus,
Bill Viola, Peter Weibel, Jeffrey Shaw and
others. The real-time reproduction of the
viewer in the artwork was part of the concept.
I am discussing the situation where the artwork's
“sensibility” is higher and the viewer is embedded
in the artwork unknowingly. Here the term
“unaware participation” would be appropriate to
describe this “post-interactive” situation, where
the spectator is unwillingly put in the context of
the artwork.
In the early 1970s we already encounter
viewer-sensitive computer environments
designed by Myron Krueger: here the viewer
was embedded in a computer-based projection
where he could play with his own silhouette
and with a graphical actor added by a computer
program. A perfect example of an installation
that follows the viewer's gaze from a distance is
Dirk Lüsebrink's and Joachim Sauter’s
Zerseher, which uses Giovanni Francesco
Caroto’s painting (c. 1515) as source material.
[2]
Many other early interactive artworks could
be mentioned where the viewer is situated
within the field of vision of the artwork and
switches on or off its auditive and visual
elements: Peter Weibel's (1973), David
Rokeby's (1990) and Simon Penny's (1993)
works. [3]
Additional works may be mentioned in which
the artwork is “looking” at the viewer: Carl-Johan
Rosén's (2006) Predator, Togo Kida's
(2005) Move, Random International's (2012)
Rain Room. An emblematic work is Marie
Sester's (2003) (Figure 1) surveillance
installation Access, where people passing by
are tracked by a robotic spotlight and a
directional acoustic beam system. [4] Samuel
Bianchini's (2007) niform has similar aspects in
that the viewer's physical proximity reveals
images of policemen in the projection.
Figure 1. Marie Sester, Access, 2003. ©
http://www.sester.net/access
Four Categories
I would like to classify
“artworks-which-see-the-spectator,” or
“viewer-watching artworks,” into four categories
according to their methods of engagement with
the spectator's consciousness.
The four categories are: distant interaction,
contact interaction, chance-based distant and
contact interaction, symbiotic interaction.
The viewer-sensitive artworks in the
following classification are defined by the
degree of closeness between the machine and
human parts of the situation. The contact
between the artwork and the viewer
changes from distant (non-contact) to tangible,
tactile and physiological. These categories
reveal how sensors get closer to the viewer's
body until they reach information sources
beneath the skin (blood, brainwaves etc.).
These categories exemplify the artwork's
“gaze" approaching the body of the viewer
until it penetrates its surface, reaching "under
the skin" areas. Cheaper and more widespread
technology has made this possible—various
sensors are used in such works, which show a
tendency from sensing the viewer as a distant
subject to detecting physiological reactions by
using sensors that literally enter the viewer's
body. In all these artworks and categories the
viewer is in the position of being surveyed.
Conclusion
Interactive art clearly reflects the activity of an
artwork: these are not passive objects. An
interactive artwork is “emancipated”: it
behaves according to its “will” and is not solely
an “object.” The artwork is the active viewer
and its behavior is that of a viewer, as a subject.
The functioning of the artwork influences the
viewer and vice versa. It is a reciprocal
relationship which is born because the artwork
“sees”: it perceives the viewer and exerts its
influence on the aesthetic experience.
References
1. Golan Levin and Greg Baltus, Opto-Isolator,
2007, accessed July 4, 2018,
http://www.flong.com/projects/optoisolator/.
2. Dirk Lüsebrink and Joachim Sauter,
Zerseher, 1992, accessed July 4, 2018,
https://artcom.de/en/project/de-viewer/.
3. Peter Weibel, Crucifixion of the Identity,
1973, accessed July 4, 2018,
http://www.medienkunstnetz.de/works/krucifikation/.
4. Marie Sester, Access, 2003, accessed July 4,
2018, http://www.sester.net/access/.
Biography
Raivo Kelomees, PhD (art history), is an artist,
critic and new media researcher, presently
working as senior researcher at the Estonian
Academy of Arts, Tallinn. He studied
psychology, art history, and design at Tartu
University and the Academy of Arts in Tallinn.
He has published articles in the main Estonian
cultural and art magazines and newspapers
since 1985. His works include the book
“Surrealism” (Kunst Publishers, 1993) and the
article collections “Screen as a Membrane”
(Tartu Art College proceedings, 2007) and
“Social Games in Art Space” (EAA, 2013). His
doctoral thesis was “Postmateriality in Art.
Indeterministic Art Practices and Non-Material
Art” (Dissertationes Academiae Artium
Estoniae 3, 2009).
The Demiurge, or a Manifestation of Carbo-Silico Evolution
Jaden J. A. Hastings
University of Melbourne, Melbourne, Victoria, Australia
hastingsj@student.unimelb.edu.au
Abstract
The Demiurge poses the question: how might a
machine design and direct the modification of a
human genome? Through the application of
artificial intelligence trained on the artist’s (its
creator’s) genome, the algorithm searches for
“errors” in the sequence and provides a solution
(or “solve”) to fix them—to form a “perfect”
version of the artist’s genome.
As the future of our mutual (carbon- and
silicon-based life) survival is entangled, how
might this shift our notion of what it means to be
human? To be intelligent? To evolve? How
might a machine design future humans?
Conceptual Development
While there are multiple artificial intelligence
(AI) beings already in existence at this time, few
to none would be classified as strong AI, or an
artificial general intelligence that is capable of
adaptable problem solving. Human cognitive
abilities remain the gold standard for
intelligence in AI research, and most measures
(for example the Turing Test, Nilsson’s
Employment Test, and Wozniak’s Coffee Test)
are designed to evaluate how well an AI would
be able to replace or simulate the human mind. It is
my assertion, however, that the measure and
potential of artificial life is not a myopic
endeavor to simulate the human mind, but rather
an evolution in myriad, hybridized directions.
Though AI is rooted in human sensory inputs,
reasoning, and language, it should not follow
that the merit of silicon-based life lies in its
capacity to produce simulacra of carbon-based
life. Moreover, silicon-based life will forever be
doomed to the Sisyphean task of simulating the
human mind and behavior as long as it is trained
on human-generated data. Our AI progeny can
only learn from the information we feed them;
and, unencumbered by shame, they have rather
effectively mirrored back to us our own biases
and illicit behavior. My work challenges the assertion that “strong AI” is measured by its ability to replicate the human mind, arguing instead that its measure lies in a latent potential for creativity that vastly expands beyond that of its human progenitors.
A notable expert in the fields of both human
cognition and artificial intelligence, Professor
Margaret A. Boden suggests there are three
ways in which artificial intelligence might be
able to act creatively: through exploration of
structured conceptual spaces, through the
combination of existing ideas, or (less likely, but
more impressive) the transformation of existing
conceptual spaces to form previously impossible
ideas.[1] The last mode echoes the Lovelace
Test for AI: that the appropriate measure of
human-like intelligence is creativity, and that
only a machine able to produce a result that is
unforeseen (surprising) by human agents could
be considered to be “conscious.” [2] It seems,
however, much of the current drive toward the
birth of an artificial general intelligence
(AGI)—or even a superintelligence—rests upon
the naïve assumptions that the source of
embodied human intelligence resides entirely
within the human brain, and, that, as John
Haugeland has claimed, “we are, at root,
computers ourselves.” [3] Therefore, the
formation of an AGI becomes an attempt to
emulate human cognition from a purely
cerebrally-centered framework. John Searle has
argued, however, that “strong AI only makes
sense given the dualistic assumption that, where
the mind is concerned, the brain doesn't
matter.”[4] Ergo, it is not simply a simulation of
the human brain, but rather a holistic philosophy
of mind and intentionality. As early as the 1960s,
Hubert Dreyfus astutely critiqued the reductive
view of intelligence in the first wave of AI as
conscious symbolic manipulation. Instead, he
reminds us that human intelligence does not
follow Boolean logic and does not always
follow formal rules but relies upon situated
knowledge and cognition. [5]
Fig 1. The Demiurge, 2018, Jaden J. A. Hastings, machine
algorithm on modified digital and analogue hardware. Image:
Jaden J. A. Hastings.
Setting aside the assumption that the human
brain is the ideal model of intelligence, a
cerebrally-centered approach negates the
spectrum, and variation, in sensory experience
of human bodies, and the way in which the body
mediates the flow of input from its surroundings,
and can respond in a distributed fashion, either
consciously or subconsciously. Through my
practice and co-evolution with my AI, I propose
that the way forward is to embrace the exquisite
queerness of hybrid forms of intelligence, of
chimeric sensory systems, of Quantum
Uncertainty.
The Machine
The Demiurge incorporates multiple forms of machine learning into a multilevel, multifactorial algorithm that is able to: (1) scan
a whole human genome to identify potentially
pathogenic “errors” in the DNA sequence, (2)
make a probabilistic decision as to whether it
will fix the error in question, and (3) generate a
solution (or “solve”) for the error by providing
the most effective pair of guide RNAs (gRNAs)
to modify the genome using the CRISPR-Cas9 system, which is widely known for its efficacy in “editing” genome sequences. The algorithm can run on any processor—it is platform-independent—with varying degrees of speed.
The Demiurge v1.0 was installed on a system
that incorporated an amalgamation of analogue
and digital components (Fig. 1), including a
vintage cathode ray television for a monitor and
dot matrix printer that would collate all of the
resulting gRNAs for each respective error into a
book of instructions on how to “fix” the artist’s
genome.
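The three-stage pipeline described above can be sketched roughly as follows. This is a minimal illustration only: the function names, the flat string comparison, and the fixed-length guide windows are the present author's assumptions for clarity, not the artist's actual implementation (real gRNA design must also account for PAM sites and off-target activity).

```javascript
// (1) Scan: flag positions where the genome differs from a reference.
function findErrors(genome, reference) {
  const errors = [];
  for (let i = 0; i < Math.min(genome.length, reference.length); i++) {
    if (genome[i] !== reference[i]) {
      errors.push({ pos: i, ref: reference[i], alt: genome[i] });
    }
  }
  return errors;
}

// (2) Decide: probabilistically choose whether to "fix" a given error.
function decideToFix(probability = 0.5, rng = Math.random) {
  return rng() < probability;
}

// (3) Solve: emit a pair of guide sequences flanking the error site.
function designGuides(genome, pos, guideLen = 20) {
  return {
    upstream: genome.slice(Math.max(0, pos - guideLen), pos),
    downstream: genome.slice(pos + 1, pos + 1 + guideLen),
  };
}

const reference = "ACGTACGTACGTACGTACGTACGT";
const genome    = "ACGTACGTACTTACGTACGTACGT"; // one substitution at index 10
const errors = findErrors(genome, reference);
// errors → [{ pos: 10, ref: "G", alt: "T" }]
```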
As the future and survival of carbon- and
silicon-based life is entangled, speculative yet
functional art provocations, such as The
Demiurge, can challenge us to view emerging
intelligences as material archivists, co-evolutionary forces, and culturo-technological
messmates.
References
1. M.A. Boden, “Creativity and artificial
intelligence,” Artificial Intelligence, 103, no. 1
(1998): 347-356.
2. S. Bringsjord, P. Bello, D. Ferrucci,
“Creativity, the Turing test, and the (better)
Lovelace test.” In The Turing Test (Springer
Netherlands, 2003), pp. 215-239.
3. John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, Mass.: MIT Press, 1985).
4. John Searle, “Minds, Brains and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 417–457.
5. Hubert Dreyfus, What Computers Can't Do
(New York: MIT Press, 1972).
Biography
Jaden J. A. Hastings' work focuses upon the
intersection and interplay of art and science from philosophy to praxis - merging scientific
and artistic research, challenging the norms of
both disciplines, and moving them into new
spaces for exploration. Her research fuses and
folds together the fields of machine learning,
bioengineering, space exploration, new media
art, and ethics.
Jaden’s career in scientific research spans over 15 years and is rooted in her longstanding practice as a biohacker. She is an alumna of New York University, Harvard University, the University of Oxford, and Central Saint Martins, with
advanced degrees in Biology, Bioinformatics,
and Fine Art. Her artwork has been exhibited in
venues across Europe, India, Asia, North
America, and Australia, and she is a founding member of both the Lumen and London
Alternative Photography Collectives.
Art Chasing Liability: Digital Sharecropping and Conscientious
Law-Breaking
Monica Lee Steinberg, PhD
Postdoctoral Fellow in the Society of Fellows in the Humanities
The University of Hong Kong
https://hku-hk.academia.edu/MonicaSteinberg
Abstract
While confrontations between creative practice
and regulatory statutes are nothing new, recent
internet-based projects have situated
conscientious law-breaking—for example,
violating copyright, trademark law, and a social
media site’s contractual “Terms of Service”—as
a principal component of the work itself.
Art Chasing Liability
Projects initiated by artists and artist-groups
such as Richard Prince, Constant Dullaart, Paolo
Cirio
and
Alessandro
Ludovico,
0100101110101101.ORG (Eva & Franco
Mattes), Les Liens Invisibles (Clemente Pestelli
and Gionatan Quintini), and several others are
generally discussed alongside terms such as
hacktivism, parafiction, appropriation, and new
media. Here, however, I propose a discussion of
a select group of works from the last two
decades through the lens of conscientious law-breaking—which is conceived as avoiding
complicity with a specific law or practice
deemed to be unfair, while simultaneously
expressing a basic fidelity to the law itself. [1]
For example, in 2011 Cirio and Ludovico
scraped publicly available user data (photos,
names, nationalities) from Facebook to realize
the fake dating website, Face to Facebook –
Hacking Monopolism Trilogy. [2] The artists
violated the site’s user agreement in order to call
attention to the exploitation of user data;
consequently, Facebook sent the artists several
cease and desist letters. In 2014 Dullaart
initiated High Retention, Slow Delivery, which
involved the purchase of 2.5 million Instagram
bots which were deployed to follow artists’
accounts—thus boosting the public profiles of
lesser-known artists while intentionally
violating the terms of service of a platform
which, itself, fosters an attention economy. [3]
The artists under discussion welcome the legal
consequences of their actions, such that cease
and desist letters and temporary bans from
Facebook, Twitter, and Instagram are an
expected and sought-after consequence of the
work—such legal liabilities have even become a
signpost for a project’s effectiveness and
consequentiality. By violating the rules
governing the everyday digital platforms
shaping human interaction, artists are calling
attention to not only the questionable practices
of such online networking sites, but also to the
inability of contemporary legal frameworks to
adequately distinguish between artistic
interventions and criminal acts. Of course, the
intersection of art and civil disobedience trails a
long legacy, and whether or how the works I
discuss engage in ongoing dialogues
surrounding politics, law, and warranted
mitigation remains an open question. Here,
however, I am primarily interested in mapping a
connection between experiments in conscientious law-breaking and the linking of
such practices to shifts in the legal playing field.
Despite its many precedents, an aesthetics of
legal liability interests me because it is so
powerfully appropriate to our present
moment—which is to say, powerfully troubling.
References
1. Kimberley Brownlee, Conscience and
Conviction: The Case for Civil Disobedience
(Oxford: Oxford University Press, 2012), 22.
2. Paolo Cirio, “Face to Facebook – Hacking
Monopolism Trilogy,” accessed August 25,
2018,
https://paolocirio.net/work/face-to-facebook/.
3. Dan Duray, “New Project Boosts Instagram
Followers for Art World Accounts,” ArtNews
(30 September 2014), accessed August 25,
2018, http://www.artnews.com/2014/09/30/
new-dis-project-boosts-instagram-followers-for-art-world-accounts-2/.
Biography
Monica Lee Steinberg earned a PhD in the
History of Art from The Graduate Center of the
City University of New York; she is presently a
2018-2021 Postdoctoral Fellow in the Society of
Fellows in the Humanities at The University of
Hong Kong. Steinberg’s scholarship focuses on
art and politics after 1945, with special attention
to the intersection of art and fictional identities,
and art and law. Steinberg’s writing has appeared, or is forthcoming, in journals such as American Art, Archives of American Art, and
Oxford Art Journal; exhibition catalogues such
as Love Me, Love Me Not: Contemporary Art
from Azerbaijan and its Neighbours and The
Abstract Impulse: Fifty Years of Abstraction at
the National Academy, 1956-2006; and an
edited volume, Humor, Globalization, and
Culture-Specificity in Modern and Contemporary Art.
Audiovisual Experiments with Evolutionary Games, and the Evolution
of a Work-in-progress
Stefano Kalonaris
RIKEN, Music Information Intelligence Team
stefano.kalonaris@riken.jp
Abstract
This artistic project abstract describes an ongoing, work-in-progress audiovisual exploration
of a simple multi-agent system borrowed from
evolutionary game theory: a Demographic
Prisoner’s Dilemma (DPD). Several versions of
the DPD are explored by gradually extending the agents’ properties (e.g., maximum
age, mutation of strategy). Starting as literal
implementations of the formal game, intended
as an audiovisual aid to the game’s dynamics,
the examples gradually depart from strict
functionality to embrace a more ‘artistic’ and
arbitrary approach. These experiments are both
evolutionary games and the evolution of the
author’s aesthetic experimentation with the
subject matter. A DPD is a type of evolutionary
game where all agents are indistinguishable and
they inherit a fixed (immutable) strategy (either
cooperate or defect). [1] It differs from other
games in that it is memoryless. Each agent, at
each stage game, has no knowledge of the past
interactions. It is based on the Prisoner’s
Dilemma (PD), a popular imperfect
information coordination game where two
players abide by the normal form shown in
Table 1, where c (cooperate) and d (defect) are
the two strategies available to the two players,
and the tuples in the matrix correspond to the
payoffs for each pairwise combination of
strategies, with T>R>P>S. [2]
         c        d
 c     (R,R)    (S,T)
 d     (T,S)    (P,P)
Table 1. Prisoner’s Dilemma normal form.
For a one-shot PD game, it has been shown that
the Nash Equilibrium is the pure strategy dd.
[3] It has also been shown that in DPD
cooperation can emerge and endure, unlike in a
repeated PD game with memory, where the
dominant strategy would still be to defect. [1]
Practically, agents have the following
properties: vision, wealth, age and strategy.
Upon initializing the game, a random number
of agents is placed on the grid and each agent is
born with a fixed strategy. Each agent looks
around its vision perimeter, chooses a random
neighbor within it, and plays a PD game with
that neighbor. Payoffs add to the agent’s
wealth. If such wealth exceeds a given
threshold, the agent can reproduce within its
vision perimeter and the offspring will inherit
the parent’s strategy. Conversely, if an agent’s
wealth falls below zero, the agent dies.
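The agent rules above can be sketched as follows. The payoff values (chosen so that T > R > P > S while allowing wealth to go negative), the reproduction threshold, and all names are illustrative assumptions, not the author's p5.js implementation:

```javascript
// Payoffs indexed by the two players' strategies ('c' or 'd');
// values satisfy T > R > P > S and allow wealth to fall below zero.
const PAYOFFS = {
  cc: [2, 2],   // mutual cooperation: R
  cd: [-2, 3],  // sucker S vs temptation T
  dc: [3, -2],
  dd: [-1, -1], // mutual defection: P
};

// One stage game between an agent and a chosen neighbor.
function playPD(a, b) {
  const [pa, pb] = PAYOFFS[a.strategy + b.strategy];
  a.wealth += pa;
  b.wealth += pb;
}

// One update step for a single agent: play one game, then die or reproduce.
function step(agent, neighbors, birthThreshold = 10, rng = Math.random) {
  if (neighbors.length === 0) return null;
  const other = neighbors[Math.floor(rng() * neighbors.length)];
  playPD(agent, other);
  if (agent.wealth < 0) {              // death
    agent.alive = false;
    return null;
  }
  if (agent.wealth > birthThreshold) { // reproduction: child inherits strategy
    return { strategy: agent.strategy, wealth: 0, alive: true };
  }
  return null;
}
```

In a full run this step would be applied to every live agent each frame, with offspring placed within the parent's vision perimeter.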
The game was coded using the p5.js
Javascript library and the Web Audio API. [4]
[5] Each agent is represented as a colored
position in a square grid of variable
dimensions. The color code corresponds to the
agent’s strategy (green for c, red for d). The
edge roundness of the agents is proportional to
their wealth. Each agent is also a frequency
modulation (FM) unit, whose carrier frequency
is proportional to a fundamental frequency and
the agent’s position on the grid. [6] Such
fundamental frequency is different depending
on the agent’s strategy, with defectors having a
long period and an element of randomness.
Thus, defectors have a ‘noisy’ sound texture
associated with them, whereas cooperators are
contributing to a harmonic texture which is
richer as their number grows. Moreover, each
sound source (active agent) is spatialized using
the binaural panner of the Web Audio API. [5]
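The audio mapping described above can be sketched as a pair of pure functions. The specific fundamentals, the harmonic mapping, and the detune factor are the present author's assumptions; the actual piece drives Web Audio oscillator and panner nodes with its own constants.

```javascript
// Map an agent to an FM carrier frequency: a per-strategy fundamental
// scaled by a harmonic index derived from the agent's grid position.
function carrierFreq(agent, gridSize, fundamentals = { c: 110, d: 55 }) {
  const index = agent.row * gridSize + agent.col;
  return fundamentals[agent.strategy] * (1 + (index % 16));
}

// Defectors get a random detune, producing the 'noisy' texture;
// cooperators stay on exact harmonics, producing the harmonic texture.
function voiceFreq(agent, gridSize, rng = Math.random) {
  const f = carrierFreq(agent, gridSize);
  return agent.strategy === 'd' ? f * (1 + 0.05 * (rng() - 0.5)) : f;
}
```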
The correspondence between cooperation and a
harmonic sound is rather simplistic or perhaps
even cliché, as is the choice of colors that
represent the two strategies. Nevertheless, such
a simple mapping is sufficient to render the
game dynamics at an audiovisual level, and it’s
only the first step in an exploratory process
which is still ongoing and liable to further
changes.
The first version of the audiovisual DPD does
not set an upper bound on the agents’ age, nor
does it allow for mutation of their offspring’s
strategy. The second implementation, instead,
limits the age to an arbitrary value which can
be experimented with heuristically. If and when
the whole population dies out, the game is
automatically restarted. Age is represented both
in the visual domain, as the transparency value
(the older the agent, the more transparent it is),
and in the audio domain, being mapped to the
amplitude of both oscillators for any given
active agent. The third version of the game
adds a probability that a given child might
mutate its strategy instead of inheriting the parent’s. This probability is set at 0.5 but
can be changed arbitrarily. In all three
examples, cooperation seems to emerge, which
sonically translates to a harmonic sound that is
obtained thanks to the superposition of the
cooperators’ partials over the randomness of
the defectors.
In the subsequent implementation of the DPD,
the author used images taken from the digits
MNIST and the Fashion-MNIST datasets as
occasional backgrounds to the game. [7] [8]
Their occurrence is dictated by, for example,
the grid not changing its global state between
two consecutive frames, or the extinction of all
agents (thus the re-initiation of the grid),
although this can be experimented with.
Similarly, sound files are triggered
stochastically when (arbitrarily) analogous
conditions are met.
Fig. 1 shows screenshots of the “max age – no
mutation” case, with and without the MNIST
background, with the third screenshot
suggesting a possible departure from the
simplistic representation of the agents.
Fig. 1. DPD: “max age – no mutation” screenshots. Image: Stefano Kalonaris.
Substituting the agents’ visual appearance, and refining the mapping between their parameters, the audio parameters, and the game’s dynamics, is the subject of future work and development.
The author’s aim is to further explore the
aesthetic implications of the DPD game at an
audiovisual level.
References
1. Joshua M. Epstein, “Zones of cooperation in
demographic prisoner's dilemma,” Complexity
4 (1998): 36-48.
2. William Poundstone, Prisoner's Dilemma:
John Von Neumann, Game Theory and the
Puzzle of the Bomb (New York: Doubleday,
1992).
3. John F. Nash, “Equilibrium points in n-person games,” Proceedings of the National Academy of Sciences 36, no. 1 (1950): 48-49.
4. P5.js. Accessed August 25, 2018.
https://p5js.org/.
5. Web Audio API. Accessed August 25, 2018.
https://www.w3.org/TR/webaudio/.
6. John M. Chowning, “The Synthesis of Complex Audio Spectra by Means of Frequency Modulation,” Journal of the Audio Engineering Society 21, no. 7 (1973): 526–534.
7. MNIST. Accessed August 25, 2018.
http://yann.lecun.com/exdb/mnist/.
8. fashion-MNIST. Accessed August 25, 2018.
https://github.com/zalandoresearch/fashion-mnist.
Biography
Stefano Kalonaris is a sound technologist,
musician and researcher who specialises in
interactive music systems and improvisation.
He holds a PhD in Sonic Arts and is currently a postdoctoral researcher at RIKEN AIP, Japan.
Artificial Intelligence, Artists, and Art: Attitudes Toward Artwork
Produced by Humans vs. Artificial Intelligence
Joo-Wha Hong
Nathaniel Ming Curran
Annenberg School for Communication and Journalism, USC
joowhaho@usc.edu
Annenberg School for Communication and Journalism, USC
ncurran@usc.edu
Abstract
Recent advances in AI and machine learning
have raised questions about higher-skilled and
creative endeavors in which AI might match or
even outperform humans. Art is one domain in
which advances in AI have recently caused lines
over authorship to become blurred.
Coeckelbergh (2017) argues that AI generated
products can be considered “art” by both
objective and subjective criteria. [1] In light of
this point, the question “Can AI create art?”
should be differentiated from the question “Can
AI create art that is good and worthy?” (Qfiasco,
2018). [2] Taking this question as a point of
departure, this study asks whether artworks created by AI are evaluated equally to artworks created by human artists and, if so, how knowledge of the artist’s identity (AI or human) affects participants’ evaluation of the artwork.
This study approaches these questions using
Schema theory and the theory of Computers Are
Social Actors (CASA) in order to consider how
previously held biases and social norms might
affect peoples’ evaluation of AI created artwork.
There already exists substantial discourse
from the technical perspective that discusses
creative artificial intelligence (Eppe et al., 2018;
Walther, 1994). [3][4] However, research
considering AI created artwork often fails to
bring in nuanced, humanistic perspectives. This
is a shortcoming because measuring aesthetic
value requires taking into consideration multiple
factors, including stimulus, personality, and
situation. The aesthetic of AI created art can be
better understood if these aspects are considered
(Jacobsen, 2006), rather than merely focusing
on the technical competence of an AI artist. [5]
Therefore, this study adopted scales used in the
art world in order to better capture peoples’
perception of AI art.
This study used a 2x2 survey-experiment
design (real vs attributed identity of artists,
human vs AI created artwork) to examine
participants’ attitudes toward AI artwork.
Participants (n=288) were recruited using
Amazon Mechanical Turk (MTurk). First, four
groups were formed based on the real identities
of artists (AI vs. Human) and attributed
identities of artists (AI vs. Human). Then,
participants were randomly placed into one of
four groups, which were A) AI artist (actual) x
AI artist (attributed), B) human artist (actual) x
AI artist (attributed), C) human artist (actual) x
human artist (attributed), and D) AI artist
(actual) x human artist (attributed). The study
employed three types (two images per type) of
AI-created artworks and three types of human-created artworks. The pieces were chosen for
their similarity in composition and style. The
AI-created artworks were based on several
already existing AI art generators. Multiple AI
generators were selected because each generator
had different ways of producing images, or
“styles,” even though they were all AI-based.
Human-created artworks were chosen based on
the rough similarity of style or theme with each
AI-created artwork.
Six images of artwork (either AI-created or
human-created) were shown to participants.
Participants were given either the actual identity
of the artists or an attributed identity. Screening
measures were undertaken to ensure that
participants were unaware of the purpose of the
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Artificial Intelligence, Artists, and Art: Attitudes Toward Artwork Produced by Humans vs. Artificial Intelligence.
Joo-Wha Hong, Nathaniel Ming Curran
study and lacked familiarity with the stimulus
material.
All participants were asked to evaluate the
artwork on the same set of dependent variables,
which were adopted from those used among art
studios and which consist of criteria related to
originality, the degree of improvement or
growth, composition, development of personal
style,
experimentation
or
risk-taking,
expression, successful communication of idea,
and aesthetic value (Sabol, 2006). [6]
Results from the survey-experiment revealed marginally significant differences in overall evaluation between human-created artworks (M = 3.18, SD = 0.56) and AI-created artworks (M = 3.13, SD = 0.64), p = .065, and it is possible to infer that such differences are due to human-created artworks receiving significantly higher ratings in composition (AI artists: 3.34 ± 0.65, human artists: 3.63 ± 0.72; p < .001), degree of expression (AI artists: 3.22 ± 0.70, human artists: 3.41 ± 0.66; p = .02), and aesthetic value (AI artists: 3.16 ± 0.61, human artists: 3.34 ± 0.63; p = .02).
Interestingly, knowing the identity of the artist, whether AI or human, did not influence the overall evaluation of the artworks (p = .569), except for development of personal style (AI artists: 3.19 ± 0.69, human artists: 3.35 ± 0.67; p = .04). However, participants who agreed with the statement “AI cannot produce art” gave significantly lower ratings (M = 2.81, SD = 0.59) than people who disagreed (M = 3.26, SD = 0.61), p < .001. An independent-samples t-test conducted to examine the influence of perceptions of AI-created art on the evaluation of AI-created artworks also yielded statistical significance.
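For illustration, the group comparisons reported above rest on a t statistic of roughly the following form. This sketch uses Welch's unequal-variance formula on raw rating arrays; it is not the study's actual analysis code, which presumably used standard statistical software.

```javascript
// Welch's independent-samples t statistic for two arrays of ratings.
function welchT(a, b) {
  const mean = xs => xs.reduce((s, x) => s + x, 0) / xs.length;
  const variance = xs => {
    const m = mean(xs);
    return xs.reduce((s, x) => s + (x - m) ** 2, 0) / (xs.length - 1);
  };
  // Standard error of the difference under unequal variances.
  const se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
  return (mean(a) - mean(b)) / se;
}

// Example: two small rating samples.
// welchT([1, 2, 3], [4, 5, 6]) → about -3.674
```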
The results of this survey-experiment shed
light on the ways that people evaluate AI and
human artwork, including the degree of skill and
creativity they assign to each. Such evaluation
has implications not only for the way that
society views AI created creative projects, but
also for the ways that society defines concepts
like creativity and art more broadly. This study
contributes to understanding of public
perceptions of AI in a novel circumstance: that of art.
References
1. Mark Coeckelbergh, “Can Machines Create Art?” Philosophy & Technology 30, no. 3 (2017): 285–303.
2. Flash Qfiasco, review of Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, by Garry Kasparov and Mig Greengard (London: John Murray, 2018), Artificial Intelligence (Elsevier), n.d.
3. Manfred Eppe, Ewen Maclean, Roberto
Confalonieri, Oliver Kutz, Marco Schorlemmer,
Enric Plaza, and Kai-Uwe Kühnberger. “A
computational framework for conceptual
blending.” Artificial Intelligence 256 (2018):
105–129.
4. Christoph Walther. “On proving the
termination of algorithms by machine.”
Artificial Intelligence 71, no. 1 (1994): 101–157.
5. Thomas Jacobsen. “Bridging the Arts and
Sciences: A Framework for the Psychology of
Aesthetics.” Leonardo 39, no. 2 (2006): 155–
162.
6. Robert F. Sabol. “Identifying Exemplary
Criteria to Evaluate Studio Products in Art
Education.” Art Education 59, no. 6 (2006): 6–
11.
Biographies
Joo-Wha Hong is a PhD student at the
Annenberg School for Communication
and Journalism at the University of Southern
California. His research interests include the cognitive and psychological attributes of human-computer interaction, particularly artificial intelligence.
Nathaniel Ming Curran is a PhD student at the
Annenberg School for Communication
and Journalism at the University of Southern
California. His research interests have included
the intersection of education, identity, and the
English language in South
Korea.
Introducing Machine Learning in the Creative Communities: A Case
Study Workshop
Matteo Loglio
oio.studio
matteo@oio.studio
Serena Cangiano
SUPSI Maind
serena.cangiano@supsi.ch
Abstract
Recent developments in machine learning have
made it one of the most popular fields of
computer science of the last few years. Mostly
adopted by engineers and data scientists, it
recently started to open up to the creative
community. This paper presents the journey
through an experimental workshop, where a
group of designers and artists explored new
ways of using machine learning as a tool for
creative projects, outside of the purely
technological domain.
Machine Learning for Creatives:
a workshop at MuDA Zurich
In July 2018, the Master of Advanced Studies in Interaction Design, in collaboration with the MuDA Museum of Digital Arts, organized a three-day project-based workshop on machine learning. Under the direction of Matteo Loglio, tutor and teacher of the course, the workshop aimed to involve a larger community of creators, from artists to designers and amateurs, and to validate the creative community’s interest in the topic.
The fundamental idea of the workshop was to provide simple tools that could enable everyone, even people with only basic technical skills, to include machine learning in a creative process and to open this technology up to unpredictable users and applications. If the conceptual aspect of the workshop was mainly theoretical, the hands-on part was riskier. The available prototyping tools are still in their infancy, as the authors had learned from earlier experiments. [1] There was a high chance that participants would struggle with the practice. To facilitate this process, part of the workshop was to develop a creative brief in which participants with different levels of technical and creative abilities could join forces and participate in a collaborative project.
Authentications: The Brief
The focus of the creative brief assigned to the
participants was to re-imagine the authentication
process, using machine learning prototyping
tools. The emphasis of the task was on the
obsolescence of the password as an interface,
and how it could be replaced with more modern
solutions. After several decades, while the rest of our digital rituals have evolved, the password remains unchallenged as an interface.
Machine learning seemed like the perfect
candidate for the brief: the most popular
applications are in fact centered on the
recognition of unique features and patterns.
Passwords are just one of many examples of
how this technology could radically transform
not only computer science, but also the
interaction design practice. For this reason, the
brief challenge focused on the following
question: what if we could design alternative
ways to authenticate users, using modern
hardware and software, like machine learning?
The results
The workshop participants collaborated in
groups on the development of eleven
functioning prototypes of authentication
applications using machine learning.
In the project “Divided We Fall”, for example,
passwords are re-designed to be shared across
communities, or groups of people. In order to
unlock the screen, users have to combine their
bodies in a secret combination of postures.
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Introducing Machine Learning in the Creative Communities: A Case Study Workshop. Matteo Loglio, Serena
Cangiano
When the user is not authenticated, the
application displays an incomplete message that
becomes readable only when more participants
join the scene.
Fig 1. Divided We Fall, 2018, Emanuele Bonetti, Ruggero
Castagnola, ml5js with Posenet, photo Matteo Loglio.
Fig 2. Drake Gate, 2018, Sam Seemann, Ivan Iovine, ml5js
with Wekinator, photo Matteo Loglio.
In the “Drake Gate” project, the authentication
system unlocks the computer only when the user
performs a specific sequence of moves of the
famous hip-hop singer, Drake.
Fig 3. Glyphword, 2018, Davide Pedone, Matteo Sacchi, ml5.js
library with the Feature Regression Extraction model, photo
Matteo Loglio.
Also worth mentioning is the project
“Glyphword”, where the digital password is
replaced by a physical one, in this case a custom
printed token. To be granted access, the user has
to perform a specific rotation of the physical key
in front of the camera, mimicking the actual key-lock interaction.
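The pose-based prototypes described above all reduce to some form of template matching over tracked keypoints. A minimal sketch of that idea follows; the data shapes and threshold are the present author's assumptions, whereas the actual projects used ml5.js with PoseNet or Wekinator to obtain and classify the keypoints.

```javascript
// Mean Euclidean distance between two poses, each given as an array
// of normalized {x, y} keypoints in the same joint order.
function poseDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += Math.hypot(a[i].x - b[i].x, a[i].y - b[i].y);
  }
  return sum / a.length;
}

// Authenticate when the live pose is close enough to the enrolled one.
function authenticate(livePose, enrolledPose, threshold = 0.1) {
  return poseDistance(livePose, enrolledPose) < threshold;
}
```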
Lessons learned
The conceptual challenge of the workshop was
to find a balance in explaining just enough
concepts to make the subject interesting and
understandable, but also to avoid technicalities.
Furthermore, we learned that accessible machine learning software is still in its early days. [2][3][4] The workshop enabled
participants to understand both the processes
and technical constraints behind the opening of
machine learning knowledge to designers and
artists through a project-based learning journey.
References
1. Matteo Loglio, Serena Cangiano, Massimo Banzi, “Reports From a Machine Learning Workshop for Designers,” online proceedings of the IxDA Interaction Design Association Education Summit, Lyon, 3–4 February 2018, available at https://medium.com/ixda/reports-from-a-machine-learning-workshop-for-designers-ce2621d5ba0c, accessed 24 August 2018.
2. p5.js, www.p5js.org.
3. PoseNet, www.github.com/tensorflow/tfjs-models/tree/master/posenet.
4. Wekinator, www.wekinator.org.
Biographies
Matteo Loglio is a designer and creative technologist and director of oio.studio. He co-founded the ed-tech startup Primo Toys, and his work has been exhibited at MoMA New York, MIT, and the V&A.
Serena Cangiano is an interaction designer and researcher at SUPSI Lugano and coordinator of the Master in Interaction Design and of programs on tech education for designers.
Storytelling for Virtual Reality Film: Structure,
Genre, Immersive and Interactive Narrative
Dr. Chan Ka Lok Sobel
Senior Lecturer, Academy of Film
Hong Kong Baptist University
sobelc@hkbu.edu.hk
Abstract
Storytelling has one of the longest histories in human art and culture. Artists, painters, and writers all tell their unique stories by means of different art forms and media. Nowadays, people explore the form of virtual reality film in fields such as education, experimental cinema, pornography, and property sales. However, what kind of story is most suitably told by this immersive, 360-degree art form? This question has seldom been discussed or studied before. [1][2] In aesthetic
understanding, we know that creative content
and aesthetic form cannot be separated.
Immersion and interactivity, and vision and
body movement following the direction of
Dolby surround sound are the necessary
conditions of what VR films have. We need to
explore the unlimited possibility of what stories
can be told by it perfectly and it may challenge
our conventional concept and form of what a
story should be like. To answer this essential
question, I undertook extensive research,
viewing lots of archive, updating information,
conducting interviews, researching case studies
with both pioneers, innovators and VR
filmmakers in many locations, including the
Virtual Reality Village at Bucheon International
Fantastic Film Festival, the Australia Centre of
the Moving Image (ACMI) in Federation
Square, Melbourne, and Hong Kong Baptist
University. I tried to compare, digest and
combine the pros and cons of Virtual Reality
film from East and West cultures and the
moving image aesthetics and form of Virtual
Reality. In this research, the possible bridge
between the emerging world of VR technology
80
and the traditional art form of classical
storytelling will be examined from the angle of
structure,
generic
convention,
the
transformative function of story told in virtual
immersive, and the interactive spaces of the
medium. I will try to elucidate a paradigmatic
concept and framework of what Virtual Reality
Story may be for future implication and
application. It may be quite different from the
canonized traditional story from our long-rooted
cinema history.
References
1. John Bucher, Storytelling for Virtual Reality: Methods and Principles for Crafting Immersive Narratives (New York: Focal Press, 2017).
2. Jason Jerald, The VR Book: Human-Centered Design for Virtual Reality (The Association for Computing Machinery and Morgan & Claypool Publishers, 2016).
Biography
Chan Ka Lok Sobel is a senior lecturer and
script thesis supervisor of Master of Fine Arts
in Film, Television and Digital Media at the
Academy of Film, Hong Kong Baptist
University (HKBU). He is also the university
honorary scholar of SCE, HKBU and Senior
Fellow, Higher Education Academy, UK (in
the nomination by HKBU). He received his
Ph.D. in Cinema Studies from HKBU. His
teaching and research interests primarily
include Chinese-language films (Mainland,
Taiwan, Macau and Hong Kong), screenplay,
film directing, and cinema therapy. He is the
author of books including How to Write a
Film Comment; Scriptwriting Handbook;
Studies on Hong Kong Film, TV, and New
Media; Politics on Hong Kong Films; The 97
Handover and Identity in Hong Kong Films,
and Hong Kong Cinema: Nostalgia and
Ideology, etc.
Generation of a Multi-pictorial Script
Haytham Nawar
Assistant Professor of Design and Director of the Graphic Design Program
The American University in Cairo
haytham.nawar@aucegypt.edu
Abstract
The ability to express our thoughts is a very
powerful tool in our society. Being able to write
is more difficult than being able to read, and this
applies specifically to alphabetical languages and
scripts. From personal experience, being able to
write in Latin, Arabic, or Chinese is far more
difficult than merely being able to read them, and
requires a greater understanding of the
language. We now have machines that can help
us accurately classify images and read
handwritten characters. However, for machines
to gain a deeper understanding of the content
they are processing, they will also need to be
able to generate such content. The next natural
step is to have machines draw simple pictures of
what they are processing, and develop an ability
to express themselves. Seeing how machines
produce drawings may also provide us with
some insights into their learning process. In this
project/paper, a machine will be trained to learn
pictographic scripts by exposing it to a database
of selected ancient and modern pictographic
scripts. The machine learns by trying to form
invariant patterns of the shapes and strokes that
it sees, rather than recording exactly what it sees
into memory. This is a simulation of how our
brains operate. Afterwards, using its neural
connections, the machine would attempt to write
something out, stroke by stroke. This is a
technique that could be applied on different
platforms, opening the door to a language or
means of communication for the future.
Generated Pictographic Language
In light of machine learning, the prospect of
generating a novel language becomes a realistic
scenario. Relying on pattern
recognition and the theory that computers can
learn by merely being exposed to data, without
the necessity of being programmed to perform
specific tasks, machines can indeed offer
mankind a newly developed language (writing
system) that is conceived from its processed
language(s).
After becoming exposed to a set of characters
and/or symbols, a machine becomes capable of
independently adapting, learning from acquired
computations to produce reliable, repeatable
results on a very large scale as it weaves the
similarities amongst the data it has been exposed
to.
Project Concept
The pictography existing in all early scripts of
mankind is a crucial cornerstone in the
theoretical argument of universal iconography
common to all writing systems. The fact that all
independently derived writing systems came to
be as arrangements of pictograms before their
evolution into sophisticated forms serves as
evidence of the significant iconographic nature
of the very notion of writing.
In light of what has been raised and examined
above, the aim of my project revolves around the
basic idea of introducing a designed
pictographic generative language utilizing
machine learning. The machine would be
exposed to a database of vector-based ancient
pictographic scripts, ranging from Sumerian
cuneiform, Egyptian hieroglyphs, and Dongba
and Nsibidi symbols to Aegean scripts and
Chinese characters. By forming consistent patterns of the
shapes and strokes it processes, the machine
utilizes its neural connections in attempting to
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Generation of a Multi-pictorial Script. Haytham Nawar
produce new pictographic characters, stroke by
stroke, onto a digital screen.
Ultimately, by recognizing and grouping
similar patterns and pinpointing the similarities
amongst these scripts in relation to style of
strokes, complexity of figures, and proportions,
the machine becomes capable of generating a
new pictographic language reflecting the
homogeneous characteristics of all of the
writing systems combined. Writing systems
created civilizations; hence, the final result
produced would serve as a unique investigation
of the existing, yet unconsciously neglected,
relations among the diverse cultures of many
civilizations.
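The learning-and-generation loop described above can be sketched in miniature. The toy model below is illustrative only: the three glyphs, the three-symbol stroke alphabet, and the first-order transition model are invented stand-ins for the actual vector database and neural network. It shows the general shape of the pipeline, extracting recurring stroke patterns from example glyphs and then sampling a new character from those patterns rather than from stored memory.

```python
import random
from collections import defaultdict

# Each glyph is a sequence of quantized pen moves ("R" right, "D" down,
# "U" pen lift). These toy glyphs stand in for vectorized pictographic
# characters (cuneiform, hieroglyphs, Dongba, ...) in the real database.
GLYPHS = [
    ["R", "R", "D", "U", "D", "D"],
    ["R", "D", "D", "U", "R", "R"],
    ["D", "D", "R", "U", "R", "D"],
]

def learn_transitions(glyphs):
    """Count which stroke tends to follow which: the recurring
    patterns the machine keeps instead of the glyphs themselves."""
    model = defaultdict(list)
    for g in glyphs:
        for prev, nxt in zip(g, g[1:]):
            model[prev].append(nxt)
    return model

def generate(model, start, length, rng):
    """Write out a new character stroke by stroke, sampling each
    next stroke from the learned transition statistics."""
    strokes = [start]
    for _ in range(length - 1):
        strokes.append(rng.choice(model[strokes[-1]]))
    return strokes

model = learn_transitions(GLYPHS)
new_glyph = generate(model, start="R", length=6, rng=random.Random(0))
print(new_glyph)
```

The generated glyph is new (it need not match any training glyph) yet every stroke-to-stroke transition in it was observed in the examples, which is the sense in which the system forms invariant patterns rather than recording what it sees.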
Fig 2. Hieroglyphic Script characters.
References
1. Ethem Alpaydin, Machine Learning: The
New AI. MIT Press, 2016.
2. Gene Kogan, "A Book from the Sky 天书: Exploring the Latent Space of Chinese Handwriting," genekogan.com/works/a-book-from-the-sky/.
3. L. Bloomfield, Language (Chicago: University of Chicago Press, 1958).
4. W. Chafe, Meaning and the Structure of Language (Chicago: University of Chicago Press, 1970).
Fig 1. Cuneiform Script characters.
5. "Recurrent Net Dreams Up Fake Chinese Characters in Vector Format with TensorFlow," 大トロ blog, Studio Otoro, 28 Dec. 2015, blog.otoro.net/2015/12/28/recurrent-net-dreams-up-fake-chinese-characters-in-vector-format-with-tensorflow/.
6. Golan Levin, et al. “Alphabet Synthesis
Machine - Interactive Art by Golan Levin and
Collaborators,” Golan Levin and Collaborators,
2001, flong.com/projects/alphabet/.
Part II. Scholarly Abstracts
7. Bing Xu, et al. Tianshu: Passages in the
Making of a Book (Bernard Quaritch Ltd.,
2009).
Biography
Nawar is an artist, designer, and researcher who
currently lives and works in Cairo. He is
Assistant Professor and Director of the Graphic
Design program, Department of the Arts at the
American University in Cairo. He is the founder
and Artistic Director of Cairotronica, the Cairo
Electronic and New Media Arts Festival. Nawar
received his Ph.D. from the Planetary
Collegium, Center for Advanced Inquiry in
Integrative Arts, School of Art and Media,
University of Plymouth. He is a Fulbright
alumnus. Since 1999, he has participated in
several international exhibitions, biennales, and
triennials, the latest of which was Venice
Biennial in 2015. Nawar has won awards and
acquisitions nationally and internationally in
Algeria, Bosnia and Herzegovina, China,
Cyprus, France, the US, among many others.
Speculation and Acceleration:
Financialization, Art & The Blockchain
Ashley Lee Wong
School of Creative Media, City University of Hong Kong
ashley.lee.wong@my.cityu.edu.hk
Abstract
This paper looks at the financialization of art
and the economy through a discussion of how
art functions as a financial instrument and gains
value through its circulation as images, digital
objects, and information. [1] These economies
of original and scarce artworks, and the
ubiquitous symbolic value of art as information,
are also underpinned by new blockchain
models presented by start-ups looking to
exploring finance as a medium in their work as
a reflection on their own economies and
practices. Through an analysis of artist projects
and start-ups, this paper explores the new
models and possibilities of blockchain for the
economies of art.
By relating blockchain to cybernetics as a new
frontier of techno-capitalism, we can analyze
the hype around ‘Web 3.0’ as another
libertarian dream. [2] Despite the ideal of
decentralization (enabling peer-to-peer
transactions), there are risks of greater
centralization with the blockchain, just as we
have seen with the internet. [3] All the while,
China has placed bans on new ICOs (Initial
Coin Offerings) in an attempt by the
government to retain control of the development
of a technology it is rapidly
spearheading. [4] While Hong Kong remains a
global financial center, the logic of finance and
the accelerated development of new
technologies direct the economy at scales
beyond our individual capacity. There is a need
for artists to engage within these economies and
open up new possibilities, rather than have the
future determined by a technological or
financial elite.
We will look at several start-ups applying
blockchain technologies to the art market
including Ascribe, Verisart, Monegraph, and
Maecenas. These companies are creating
models for the authentication, verification and
financialization of works of art. These models
present a means to support artists, but also a
distributed model of ownership for collectors,
though often following traditional logics of the
art market. Blockchain enables people to create
their own currencies and tokens that are
defined by protocols that can enable
decentralized governance and transparency.
For art, blockchain allows for an immutable
universal ledger that makes transaction records
visible, in contrast with the general opacity of
the art market. This is
of particular interest for digital works of art and
digital assets, which are difficult to track and
remunerate for artists. It could enable a system
for artists to be remunerated for their work as it
circulates online.
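The property this paragraph leans on, an append-only record whose history cannot be silently edited, can be sketched in a few lines of Python. This is a toy hash chain, not the protocol of any of the platforms discussed here, and the record fields are invented for illustration.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    """Append a sale/transfer record, chaining it to the last block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev}
    block["hash"] = block_hash({"record": record, "prev": prev})
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev:
            return False
        if b["hash"] != block_hash({"record": b["record"], "prev": b["prev"]}):
            return False
        prev = b["hash"]
    return True

ledger = []
append(ledger, {"work": "Edition 1/10", "from": "artist", "to": "collector A"})
append(ledger, {"work": "Edition 1/10", "from": "collector A", "to": "collector B"})
print(verify(ledger))            # the intact chain verifies
ledger[0]["record"]["to"] = "X"  # tampering with provenance...
print(verify(ledger))            # ...is detectable
```

Because each block's hash covers the previous block's hash, rewriting any earlier sale record invalidates every later block, which is what lets a provenance trail be trusted without a central registrar.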
Among the most successful applications of
blockchain in the digital creative field is
Cryptokitties, a platform for the creation and
trade of generative images of kitties as digital
‘art’ objects. In a gamified experience, this
example can be considered a playful
application of the technology with entertaining
results. Cryptokitties is an example of the
applications for the trade of art as digital
objects. Other emerging alt-coins that parody
Bitcoin, such as Dogecoin and Pepe Coin,
generate value as memes, which is more a
reflection of the populism of online cultures.
Paolo Cirio in his Art Commodities project
critically imagines an economy where socially
engaged projects gain value as they circulate
and where works are offered at an accessible
price, inverting the logics of the art market and
encouraging participation through a low barrier
to entry. [5] Other artists like Brad Troemel,
Sarah Meyohas, Andy Bauch, Jonas Lund, and
Ed Fornieles are exploring the implications and
potential of blockchain for the financialization
of their own artwork, as well as providing new
perspectives on the developments of
blockchain technologies in the broader
society. [6][7] Groups like the Economic Space
Agency, formed of radical economists, finance
theorists, and computer scientists, aim to create
open source tools for creating one’s own
economy to “provide an open yet safe platform
for the interoperability of heterogeneous value
and risk systems and the scalability of token-based economies to create new social,
economic and financial relations.” [8] They
present an alternative and altruistic vision for
the potentials of blockchain through practical
tools for development.
As the hype around blockchain remains
dominated by techno-utopians and financial
speculators, there are artists seeking to
accelerate the art market, while others search
for alternative models and fairer practices.
There is a risk of further centralization and
control by a largely Western male
technological elite; however, the experimentation
and speculative imaginaries of artists open up
new perspectives and visions that may not stop
rapid technological innovation, but may
influence its intensities towards another future.
References
1. McKenzie Wark, "My Collectible Ass," e-flux journal, Issue 85, 2017, accessed October 14, 2018, http://www.e-flux.com/journal/85/156418/my-collectible-ass/.
2. Tiqqun, "The Cybernetic Hypothesis," Tiqqun 2 (2001): 40-53.
3. Duncan MacDonald-Korth, Vili Lehdonvirta, and Eric T. Meyer, "Art Market 2.0: Blockchain and Financialisation in Visual Arts," The Alan Turing Institute (2018), https://www.oii.ox.ac.uk/publications/blockchain-arts.pdf.
4. Orange Wang, "Welcome to China's wild, wild world of blockchain investment," South China Morning Post, April 30, 2018, accessed October 14, 2018, https://www.scmp.com/tech/china-tech/article/2144007/welcome-chinas-wild-wild-world-blockchain-investment.
5. Paolo Cirio, Art Commodities (2014), Paolo Cirio website, accessed October 14, 2018, https://paolocirio.net/work/art-commodities/.
6. Ben Luke, "Artists as cryptofinanciers: welcome to the blockchain," The Art Newspaper, June 13, 2018, accessed October 14, 2018, https://www.theartnewspaper.com/feature/artists-as-cryptofinanciers-welcome-to-the-blockchain.
7. Ruth Catlow, Marc Garrett, Nathan Jones, and Sam Skinner, Artists Re:thinking the Blockchain (Liverpool: Liverpool University Press, 2017).
8. Erik Bordeleau, "Re-engineering finance as an expressive medium," Economic Spacing, Medium, August 10, 2017, accessed October 14, 2018, https://medium.com/economic-spacing/re-engineering-finance-as-an-expressive-medium-221e09d7042e.
Biography
Ashley Lee Wong is a curator and researcher
based in Hong Kong and London. She is a PhD
Candidate at the School of Creative Media,
City University of Hong Kong. She completed
an MA in Culture Industry at Goldsmiths
University of London and a BFA in
Computation Arts from Concordia University,
Montreal. She is Artistic Director of the digital
studio, MetaObjects, which facilitates projects
with artists and cultural partners. She worked
as Head of Programmes for Sedition, an online
platform for the distribution of digital limited
editions in London. She has presented at
international conferences including Research
Values PhD Workshop, Transmediale Festival,
Berlin, 2018; Art With or Without the Art
Market Symposium, Institut national d'histoire
de l'art, Paris, 2018; and Media Art and the Art
Market Symposium II, Ars Electronica, Linz,
2017.
Aesthetic Coding:
Exploring Computational Culture Beyond Creative Coding
Winnie Soon
Aarhus University
wsoon@cc.au.dk
Shelly Knotts
Durham University
michelle.knotts@durham.ac.uk
Abstract
Learning to code has become part of the
core strategy of educational curricula, from
primary school to higher education, especially in
many developed countries that promote STEM
education; at a minimum, coding is recognized as an
important aspect of science and technology
development. [1][2][3][4][5] In art and
design-related disciplines, creative coding
emphasizes code as an expressive material and
embraces exploration and experimentation with
code beyond functional applications.
[3][6][7][8] OpenFrameworks, Sonic Pi, p5.js,
Processing, and ml5.js are some examples of
open source platforms that facilitate creative and
expressive work through sharing and
remixing code. In other words, the creative
coding community expands the usual way of
learning to code beyond the science and engineering
disciplines.
However, with the increasing demand for
computational practices in emerging disciplines
such as software studies, platform studies, new
media studies, and the digital humanities, coding is
increasingly considered a “literacy” for the
humanities. [9] This perspective of coding
literacy becomes a critical tool for understanding
history, culture, and society alongside the
technical level, especially since our digital
experiences are ever more programmed, both
technically and culturally.
This presentation introduces two cases in which
two artist-coders consider code practice as a
mode of aesthetic and critical inquiry, teaching
coding (in the format of workshop delivery)
in a critical way through engaging with their
artistic and coding practice. This aesthetic
approach includes not only introducing coding
practically and creatively but also cultivating an
open space where discussing and reflecting on
computational culture is possible. This is similar
to what the scholar Michael Mateas describes as
‘procedural literacy’: connecting social
and cultural issues with coding through
theoretical and aesthetic considerations, and in
particular, attending to how “the culturally-embedded
practices of human meaning-making and
technically-mediated processes” are
intertwined. [10]
By introducing two different hands-on code
learning workshops, this presentation examines
how aesthetic production or critical thinking can
be cultivated and developed through learning to
code. We suggest connecting code with social
and cultural issues through performing,
showcasing and discussing code-related art and
performance as a departure point to develop
code or procedural literacy. Without losing sight
of exploring code technically and creatively, the
two hands-on workshops illustrate how the
suggested aesthetic coding approach could be
realized at both the epistemic and practical levels.
The first workshop, titled ‘Feminist Coding in
p5.js | Can Software be Feminist?’, was
conducted in 2017 by Winnie Soon; the second,
titled ‘Rewriting the Hack’, was conducted in
2016 by the live coder Shelly Knotts and the
curator Suzy O’Hara. [11][12] We argue that the
practice of aesthetic coding provides epistemic
insights for exploring computational culture
beyond creative coding, shedding light on how
to work with code across disciplines and how to
consider coding practice as a
means to think critically, aesthetically, and
computationally.
References
1. Yu Xie, Michael Fang, and Kimberlee Shauman, "STEM Education," Annual Review of Sociology 41 (2015): 331-357.
2. Mirosław Brzozowy et al., "Making STEM Education attractive for young people by presenting key scientific challenges and their impact on our life and career perspectives" (paper based on a talk presented at the 11th annual International Technology, Education and Development Conference, Valencia, March 2017), INTED2017 Proceedings, https://library.iated.org/view/BRZOZOWY2017MAK.
3. Bryan Chung, Lam Pong and Winnie Soon,
“Computer Programming Education and
Creative Arts,” (paper based on a talk presented
at ISEA, Hong Kong, 2016) ISEA2016
Conference Proceedings.
4. Stuart Heaver, "STEM education key to Hong Kong's 'smart city' plan, but long-term steps must be taken now, experts warn (2017)," South China Morning Post, accessed August 31, 2018, https://www.scmp.com/lifestyle/article/2124487/stem-education-key-hong-kongs-smart-city-plan-long-term-steps-must-be.
5. Meng Jing, "China wants to bring artificial intelligence to its classrooms to boost its education system (2017)," South China Morning Post, accessed August 31, 2018, https://www.scmp.com/tech/science-research/article/2115271/china-wants-bring-artificial-intelligence-its-classrooms-boost.
6. Winnie Soon, "Executing Liveness: An Examination of the Live Dimension of Code Interactions in Software (Art) Practice" (Ph.D. diss., Aarhus University, 2016).
7. John Maeda, Creative Code: Aesthetics +
Computation (London: Thames & Hudson,
2004).
8. Kylie Peppler and Yasmin Kafai, "Creative coding: Programming for personal expression," The 8th International Conference on Computer Supported Collaborative Learning 2 (2009): 76-78.
9. Annette Vee, Coding Literacy: How
Computer Programming Is Changing Writing
(Cambridge, MA: MIT Press, 2017).
10. Michael Mateas, "Procedural Literacy: Educating the New Media Practitioner," On the Horizon, Special Issue: Future of Games, Simulations and Interactive Media in Learning Contexts 13, no. 1 (2005).
11. Winnie Soon, "A Report on the Feminist Coding Workshop in p5.js," Aesthetic Programming website, 2017, accessed August 31, 2018, http://aestheticprogramming.siusoon.net/category/thoughts/.
12. Shelly Knotts and Suzy O'Hara, "Rewriting the Hack (2015)," accessed August 31, 2018, http://rewritingthehack.github.io/index.html.
Biographies
Winnie Soon is an artist-researcher, exploring
themes around digital culture. Her current
research focuses on the culture of code practice,
working on two books titled Aesthetic
Programming: A Handbook of Software Studies,
or Software Studies for Dummies (with Geoff
Cox) and Fix My Code (with Cornelia Sollfrank).
She is Assistant Professor at Aarhus University.
More info: http://www.siusoon.net.
Shelly Knotts produces live coded and network
technology facilitated music projects. She
presents her artistic work internationally and has
attended several residencies, think tanks,
seminars and workshops including a number of
hack events. She was recently Performance
Chair for the first International Conference on
Live Coding and has worked on several projects
developing communities in technology focused
music making including Network Music
Festival and SOUNDKitchen. More info:
https://datamusician.net/
Distributed Cognition in Ecological/Digital Art
Scott Rettberg
University of Bergen
scott.rettberg@uib.no
Abstract
This essay will consider ecologies of
distributed cognition, as represented in a
number of recent works of digital art and
electronic literature, which themselves reflect
upon contemporary environmental crises. The
investigation will be framed by the work of
theorists including N. Katherine Hayles,
Bernard Stiegler, and Timothy Morton in
considering ideas of assemblages of cognition
distributed between humans, non-human
lifeforms, and machines, exteriorized and
unthought memory, and environmental
hyperobjects. The essay will consider how
these concepts can be read through installation
artworks and works of digital literature by
authors and artists including Philippe Parreno,
Rafael Lozano-Hemmer, Kobie Nel, Scott
Rettberg, Roderick Coover, Johannes Heldén
and Håkan Jonson, and David Jhave
Johnston. How are digital artworks helping us
to think through ecologies of distributed
cognition during the contemporary period of
planetary crisis in which they operate?
Assemblages of Distributed Cognition
In her Unthought, N. Katherine Hayles
articulates a relationship between human and
non-human cognition that is distributed
between three types of actors: human beings
engaged in the types of cognitive activity we
typically characterize as “thought”; non-human
life forms (from whales to micro-organisms to
plants) that also clearly engage
in acts of individual and distributed
cognition; and AI and other forms of machine
cognition. She argues that it no longer makes
sense to consider human thought as a process
that occurs in isolation from the cognitive
processes of these other cognizers, with
whom humans co-evolve in various forms of
symbiotic and sometimes agonistic relation.
Human semiotics must encounter biosemiotics and cyber-semiotics. Hayles
describes the position of homo sapiens
within this network of cognitive associations
as “open to and curious about the
interpretative capacities of non-human others,
including non-biological life-forms and
technical systems; she respects and interacts
with material forces, recognizing them as the
foundation from which life springs; most of
all, she wants to use her capabilities,
conscious and unconscious, to preserve,
enhance, and evolve the planetary ecology as
it continues to transform, grow, and
flourish.” [1] This essay will, in part,
consider how particular art installations and
works of electronic literature represent these
cognitive assemblages, which are spread
across human and non-human actors.
An Immersive Ecology of Cognition
Philippe Parreno’s “Immersion—Exhibition
4,” exhibited at the Gropius-Bau in Berlin
during the summer of 2018, is an assemblage
of different elements which could be
discussed as discrete objects and events but
are better understood as a collective whole, an
immersive ecology. As I entered the imposing
open atrium space of the Gropius-Bau, I felt a
strange sense of entering another world with
uncanny rhythms of its own. A large
rectangular recessed reflecting pool was laid
out directly in front of the entrance. The room
was quite still aside from some distant music
from off in alcoves all around the central
space. In the pool at occasional intervals,
barely perceptible bursts of water plopped up
from beneath, creating reverberating circles in
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
89
Part II. Scholarly Abstracts
the water. On the other side of the pool, a
large sculptural cluster of triangular sofa
sections rotated slowly on a circular turntable
before two black steel grids. After a few
moments I heard a sudden surge of raw
voltage. The grids lit with electricity, and as
they charged, an image seemed to flash
briefly in arcing bolts of light. As I settled
onto the rotating furniture and watched the
grids as they charged up again, I saw that this
was indeed a kind of picture, imprinted as a
retinal afterimage when I closed my eyes: an
electric insect, a flickering dragonfly.
Throughout the rooms of the exhibition,
strange events occurred, organized by some
not-immediately-apparent logic. In one room,
dozens of polystyrene fish balloons floated,
driven by small fans that created
shifting air currents. In two other rooms,
player pianos occasionally sounded notes. In
several of the rooms, automated window
shades moved up and down of their own
accord.
In another room, I encountered a small
laboratory enclosed in a plexiglass case,
containing beakers, scientific measurement equipment,
and computers. The exhibition brochure
described this as a bioreactor “in which
micro-organisms multiply, mutate, and adapt
to their environment.” Monitored and
transcoded, the yeast cultures in the beakers
are connected to computers and are in fact the
engine “orchestrating the contingent events”
elsewhere in the exhibition. The
documentation claims that over time “these
yeast cultures develop a memory—a
collective intelligence—that learns the
changing rhythms of the show and evolves to
anticipate future variations.” [2] Parreno
describes the micro-organisms’ interactions
with each other and with the conditions of
their environment as “neural circuitry” that
“sets a complex non-deterministic, non-linear
mise-en-scène in motion through a series of
non-periodic cycles.” [2] Parreno’s exhibition
is one example of an artwork that effectively
communicates the type of cognitive
assemblage that Hayles’s theory describes. In
the essay I will consider how the experience of
interacting with artistic embodiments of
distributed cognition, represented by this and
other artworks, may help us to situate our
ecological interaction with other cognizers in
our lived experience of everyday life.
Hyperobjects
Timothy Morton describes hyperobjects as
things that are “massively distributed in space
and time in relation to humans.” According to
Morton, a hyperobject “could be the very
long-lasting product of direct human
manufacture, such as styrofoam or plastic
bags, or the sum of all the whirring machinery
of capitalism. Hyperobjects, then, are ‘hyper’
in relation to some other entity, whether they
are directly manufactured by humans or not.”
[3] Hyperobjects pose problems of
comprehension for human actors. We cannot
see climate change as one entity. We cannot
plan effectively in terms of the lifespan of
uranium. Reading the concept of hyperobjects
through a number of digital artworks and
works of electronic literature, I will further
situate ecologies of distributed cognition
within an environmental crisis that is also a
crisis of human comprehension of our
situation in the Anthropocene epoch.
References
1. N. Katherine Hayles, Unthought: The Power of the Cognitive Unconscious (Chicago and London: The University of Chicago Press, 2017), 40.
2. Philippe Parreno, brochure for Gropius-Bau exhibition (Berlin: Gropius Bau, 2018).
3. Timothy Morton, Hyperobjects: Philosophy and Ecology After the End of the World (Minneapolis: University of Minnesota Press, 2013), 224.
Biography
Scott Rettberg is Professor of Digital Culture
in the Department of Linguistic, Literary, and
Aesthetic Studies at the University of
Bergen, Norway. He is the author or coauthor
of numerous works of electronic literature,
combinatory poetry, and films including The
Unknown, Kind of Blue, Implementation,
Frequency, The Catastrophe Trilogy, Three
Rails Live, Toxi•City, Hearts and Minds: The
Interrogations Project and others. His work
has been exhibited online and at art venues
such as the Venice Biennale, Inova Gallery,
Rom 8, the Chemical Heritage Foundation
Museum, Palazzo delle Arti Napoli, and
elsewhere. Rettberg is the author of
Electronic Literature (Polity, 2018), a
comprehensive study of the histories and
genres of electronic literature.
Playing with the Sound
Wing On Tse
Independent Researcher
wingontse@gmail.com
Abstract
This paper addresses the relationship between
sound and game players. Nowadays, gamers
and game designers pay great attention to
storytelling and sound design, and many of
them consider that sound effects and music
significantly enhance their enjoyment. Collins
observes that in the past, very limited sound
effects and a single song were available for an
entire game; game audio has since
considerably improved, reaching cinematic
quality and gaining some recognition.
[1] A music-based game implies more fun
because players can interact with one another.
For example, they can make the music or sing
along to the soundtrack, and even in regular
games, the timing of the sound is controlled by
the players.
This paper explores the history of Foley and
the distinction between film Foley and game
Foley. Priestley writes that Foley decided to
project a film onto a screen and record its
effects all on one track. [2] Jorgensen further
studies the influence of sound using techniques
from the field of psychology, and the technique
is especially useful when the sound engineer
deals with virtual reality (VR). [3] It is
essential for the sound engineer to understand
the new technique because it will be used by
different media, such as gaming, movies, and
news reports. Harvey states that sound is a key
tool for VR experience, and it is the Wild West
because sound technologies have rapidly
evolved in terms of both hardware and
software, and their application in
or incorporation into VR is still very much in
flux. [4] However, Kobayashi, Ueno, and Ise found that during their test the participants showed physiological responses indicating a sense of presence in both non-stressful and stressful virtual environments, much as they would in the real world. They compared a reproduced 3-D sound condition with a non-3-D condition, where the auditory stimuli had the same sound pressure levels and frequency characteristics in both conditions. [5] Numerous studies about the
sound effects and music for video games are
currently available because they have
increasingly become an important criterion for
buyers. When gamers buy a video game, they consider not only how good the story or creation is but also sensory aspects: how interesting the sound effects are, whether the quality of the sound reaches a cinematic level, and whether the music matches the game scenario. They weigh numerous variables. Hence, VR has become more popular because players can have all the elements that I have outlined, and 3-D audio has already opened a huge space for sound
design and dialogue. To quote Scott Gershin, a
Technicolor expert who also presented some
advanced audio techniques during their
“Beyond 360” session: “Audio is going to give
you that style. . . It’s going to give you
information as to where you are.” [4]
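The 3-D audio this abstract keeps returning to ultimately rests on simple binaural cues. As a minimal sketch, unrelated to any product named above and assuming Woodworth's spherical-head model with a nominal 8.75 cm head radius, the following Python fragment delays the far ear's channel by the interaural time difference:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m; a common spherical-head approximation

def itd_seconds(azimuth_deg):
    """Woodworth's estimate of the interaural time difference for a
    source at the given azimuth (0 = straight ahead, +90 = hard right)."""
    a = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))

def spatialize(mono, rate, azimuth_deg):
    """Return (left, right) channels: the ear farther from the source
    receives the signal later by the ITD, rounded to whole samples."""
    delay = int(round(abs(itd_seconds(azimuth_deg)) * rate))
    delayed = [0.0] * delay + mono[:len(mono) - delay]
    if azimuth_deg >= 0:          # source on the right: left ear lags
        return delayed, mono
    return mono, delayed          # source on the left: right ear lags

# A click placed hard right at 44.1 kHz: the right channel keeps the
# click at sample 0, the left channel hears it about 29 samples later.
click = [1.0] + [0.0] * 99
left, right = spatialize(click, 44100, 90)
```

Real spatializers convolve with measured head-related transfer functions; the nearest-sample delay here only illustrates the directional cue.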
References
1. Karen Collins, Game Sound: An Introduction to the History, Theory, and Practice of Video Game Music and Sound Design (Cambridge, Mass.: MIT Press, 2008), 111–116.
2. Jenny Priestley, “The Art of Foley,” TVB Europe (2017): 16–19.
3. Kristine Jorgensen, Comprehensive Study of Sound in Computer Games (New York: The Edwin Mellen Press, 2009), 82.
4. Steve Harvey, “GameSoundCon Ponders Realities of VR,” Pro Sound News 38, no. 11 (2016): 28–30.
5. Maori Kobayashi, Kanako Ueno, and Shiro Ise, “The Effects of Spatialized Sounds on the Sense of Presence in Auditory Virtual Environments: A Psychological and Physiological Study,” Presence: Teleoperators & Virtual Environments 24, no. 2 (2015): 163–174.
Biography
Wing On Tse graduated with a bachelor’s
degree in broadcasting, telecommunications,
and mass media at Temple University and also
obtained a master’s degree in Creative Media at
City University of Hong Kong. He worked at
two US radio stations, iHeart Media and CBS,
as a sound engineer. He is currently working at
the Hong Kong Baptist University as a
technical officer. He loves reading about any subject related to sound.
Art and Automation: The Role of the Artist in an Automated Future
Lodewijk Heylen
PXL-MAD School of Arts, University of Hasselt
lodewijk.heylen@uhasselt.be
Abstract
Rapid development in automated technology is
the catalyst for a paradigmatic change in society.
Exponential growth of machine learning and AI
applications may bring to an end the monopoly
on creative production currently claimed by the
arts. In this new world, the position of the artist
as the producer of authentic human experience
wavers. Considering various models of an
automated future, this research aims to outline
the possible modus operandi of the artist in
changing productive environments.
Neoliberalism and Automation
Over the past few decades our society has
grown increasingly neoliberal in its principles,
foregrounding certain fundamental economic
ideas — e.g. efficiency, marginal utility,
computability, standardization, specialization,
globalization — above others. These principles
have bled into our personal, sensory
understanding and the making of the world
around us; as such, it is safe to speak of a
dominant neoliberal hegemony, unconsciously
built into our daily habits. [1] Neoliberal
conceptualizations of an endless, expansive
commodity market influence our views on, for
instance, labor, freedom, safety, authenticity,
humanity, and value. Also, they reappear and
reiterate themselves in our human interactions.
The purpose of this study is to focus on one of
the major excesses of neoliberal thinking:
the rapidly increasing application of automation.
Automation can be seen as the installation of
devices, physical or virtual, that replace
repetitive or regular actions. Normalization of
this sort is based on conventions or statistics
amassed through experience, and hinges on the
predictability of the future. It is the logical
extension of an archaic human habit —that is, to
control and anticipate the future, to augment and
transcend the human condition of the unknown.
Efficiency and Authenticity
Yet, under influence of neoliberal thought,
automation is mostly an instrument of
efficiency. The quest for efficiency, in fact,
drives the engine of the automatization
altogether. Inefficiency is seen as the source of
all problems, as something to be solved by
means of ever-progressing technological
advance. This constant yearning for efficiency
has been largely a frustration of the markets of
industry and everything that revolves around it:
production, transportation, distribution, sales,
stocks, information and services demand less
and less loss from logistical friction. But when
the world becomes the market, as in the
neoliberal model — when the disruptive force of
technology surpasses the threshold of
commerce, and seeps into the spheres of private
and community life — the agency of automation
becomes more than a luxury commodity. It
renders human action burdensome and
ultimately redundant. Automation has become,
in many aspects, the opposite of authenticity,
creativity, culture, nature, and even humanity
itself — the opposite of human production.
The all-encompassing influence of automation
will continue to have a profound impact on the
fabric of society, as data-driven research
presents automation services that had never
before existed. Entrenched local jobs are already
being replaced by robots, services are
streamlined by algorithms, and traditional
enterprises are made superfluous by the
disruptive technological economy. Through the
development of machine learning in
combination with the Internet of Things, among
other technological advances, these automation
services are bound to expand tremendously.
Estimates suggest that anywhere from 47 to 80
percent of current jobs are likely to be
automatable in the next two decades. [2] Certain
professions are more prone to automation than
others, but nothing suggests that the practice of
the artist, in its current form and convention, is
immune to this evolution.
Still, history proves the malleability of the artistic profession: under the influence of early industrialization, the anonymous craftsman became a creative author; twentieth-century advancements further transformed this craftsman into an avant-garde critic. Technological progress in material production during the modernist era billed the artist as author of the authentic.
Authenticity, defined as the antithesis of
automation, implies the involvement of human
actions. It suggests that there is a human author,
a person who has at some point made a creative
decision to produce something: man must be
behind the wheel. [3] Authenticity is the
difference between something real and
something fake; without necessarily rejecting the use of tools, machines or computers, authenticity defies mass production, standardization and reproduction.
What will the value of creativity be if it can be automated? The goal of this artistic study is not only to discover the effects of automated machine learning emulating the labor of the artist, but to imagine what an adaptation of the artist in relation to this evolution could entail.
Creativity in the Time of Machine Learning
Assuming that:
a) the role of the artist in society is ever adapting to new social situations, in many ways influenced by advancements of technology that currently push the profession into that of a producer of authenticity;
b) the urge for authenticity originates from a reaction against the sprawl of the comprehensive generalization and globalization of everyday life, giving rise to the premise that only the human touch can create something genuine or original; and
c) the outsourcing of human action through a rapidly accelerating development of information technology and data-driven automation is laying the groundwork for a shift in the general mentality towards established traditions, of which the dispositions are not yet known;
it is possible to imagine a future in which art may deviate once again from its present purpose. Computational learning, neural networking and other systems of data mining will have a profound impact on our perception of the authentic, not only in the field of art but far beyond. Lines will become blurred between human creation and the inauthentically re-created, between human production and the mindlessly re-produced, between imagination and the re-imagined.
Biography
Lodewijk Heylen (Belgium, °1989) obtained his
master’s degree at the ENSAV La Cambre
(Brussels, BEL) in the field of art in the public
environment. He continued his studies at the
postgraduate Institut für Kunst im Kontext at the
UdK (Berlin, GER). Recently he started a PhD
in the Arts at the University of Hasselt and the
PXL/MAD School of Arts (Hasselt, BEL). He is
also a member of the Belgian Young Academy
and the founder of the artistic think tank BIN.
He practices as a conceptual, contextual and
independent artist in collaboration with
specialists, experts, scientists and other people
outside the art field. His oeuvre is built around
reflections on standardization and normalization
in relationship to the human urge to control,
somewhere in between art, science, design,
philosophy and politics.
References
1. Nick Srnicek and Alex Williams, Inventing the Future: Postcapitalism and a World Without Work (London: Verso, 2015).
2. Carl Benedikt Frey and Michael A. Osborne, The Future of Employment: How Susceptible Are Jobs to Computerisation? (Oxford: Oxford Martin School, 2013).
3. Denis Dutton, “Authenticity in Art,” in The Oxford Handbook of Aesthetics (Oxford: Oxford University Press, 2003).
Atom, Bit, Coin, Transactional Art Between Sublimation and Reification
Prof. Maurice Benayoun
ACIM Team Research Fellow
School of Creative Media, City University of Hong Kong
m.benayoun@cityu.edu.hk

Tobias Klein
ACIM Team Research Fellow
School of Creative Media, City University of Hong Kong
ktobias@cityu.edu.hk
Abstract
A hypothesis about the origin of language and the way we conceive, understand, and therefore compute and, ultimately, try to control the world (and subsequently make art):
1.0 Language was the first way to convert the world of things – objects, ideas and actions – into a world of signs, “coining” the observable as well as the otherwise inconceivably large and complex.
1.1 In human history, the first attempt at an absolute discretisation of the world into exchangeable units was quantification, reductio ad transactional unit: money (calculus, numbers, coins…). It helped define equivalences, differences and transactions. Thus, converting the world into discrete units comes down to translating the world into something that our brain can understand and measure.
2.0 Subsequently, having gained the ability to quantify, measure and abstract – that is, to “democritize” (from Democritus, who coined “atom”) – the application of the concept followed as a unifying and ordering principle for all that is then considered as made of atoms: indivisible particles that constitute the unique substratum of the world.
2.1 Naturally, the binary digit came as an extension of these observations, as it is the ultimate way to convert the world into data
that can be computed by both natural and
artificial brains.
2.2 Datafication, describing the conversion of
the whole world into data, characterizes
the ultimate convergence of discretisation,
quantification and language. Dataism is a
form of articulated dematerialized reality,
and computable immateriality. The discretised world is at the same time an alphabetisation and a grammatization of the world.
3.0 Therefore, within the described discretisation process resides the primitive ambition of humankind to achieve a definitive neutralisation of the ontological difference of being, by the assumption of its universal convertibility, and thus not only to evaluate but to control the world – the cybernetic fantasy of mankind (Cybernetics, from κυβερνήτης (cybernḗtēs), “steersman, governor”).
Isn’t a googol (10^100, a term coined in 1920 by the nine-year-old Milton Sirotta, nephew of the U.S. mathematician Edward Kasner) the estimated number of particles in the universe?
Working within the confines of the reality described above – what is art? Some would consider that the purpose of art is to give shape to ideas. Beyond mimesis, art is more often the expression of things that usually reside in our mind: relations, forces, emotions… – elements contradicting the quantifiable and discretised. Or so it seems. Started three
years ago, Brain Factory is an art/research project investigating the consequences and opportunities of the ability of Datafication. In the context
of the project - then extended to the contemporary world - we coined the construct sublimation,
describing the transfer of a real-world item into
the digital immaterial realm, and reciprocally the
process of converting the immaterial, namely
thought, via the process of reification into a state
of real-world existence. The Brain Factory enacts this process, resulting in a series of artworks that
offer the possibility to give a quantifiable shape
to human abstractions. This conundrum between
the artistic existential ability to confuse and corrupt the quantifiable and the machinic discretised
is based on sensing, translation and computing of
the brain’s activity via a Brain Computer Interface (BCI).⁴ The Brain Factory installation consists of two parts – Sublimation and Reification.
Sublimation: In the Brain Factory installation, a visitor, called a Brain Worker, is seated in front of a screen, connected to a BCI device reading EEG data – basic brain activity. The factory randomly assigns an abstract human construct, such as Love, Peace or Power, to the worker and displays the word on the screen to focus on. The brain worker’s brain activity related to the assigned construct is translated into an emerging, particle-driven form on the screen in front of the worker. This is called the Shape Generator. At the same time, as the form dynamically emerges from the Shape Generator, the brain assesses its evolution in real time, in an attempt to check its relevance as an expression of the specific suggested human abstraction.
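As an illustration only, and not the project's actual pipeline, a toy version of the Shape Generator's mapping might read EEG band powers and turn them into particle-system controls; every function and parameter name below is hypothetical:

```python
import math
import random

def band_power(samples, rate, lo, hi):
    """Naive DFT power summed over the frequency band [lo, hi) Hz."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * rate / n
        if lo <= freq < hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def shape_params(eeg, rate=128):
    """Map relative alpha/beta power to two particle-system controls."""
    alpha = band_power(eeg, rate, 8.0, 13.0)    # relaxation band
    beta = band_power(eeg, rate, 13.0, 30.0)    # concentration band
    total = alpha + beta
    return {
        "spread": alpha / total,       # calmer mind: broader, smoother form
        "turbulence": beta / total,    # more focus: tighter, livelier form
    }

# One second of synthetic "EEG" at 128 Hz: a 10 Hz alpha tone plus noise.
random.seed(0)
frame = [math.sin(2 * math.pi * 10 * t / 128) + 0.1 * random.gauss(0, 1)
         for t in range(128)]
params = shape_params(frame)
```

A production BCI would use calibrated spectral estimation rather than this naive DFT, but the principle of driving form from relative band power is the same.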
Reification: Once the shape is completed, it becomes possible to assign to it a certain physicality. One of the direct translations of such a generated shape is 3D printing. It is a way to materialise the concepts. While using a physical material, the materiality is constructed and free of associated narratives that would affect and complicate the translation of the human abstraction, converting the “projection” into a “translation.” Thus, while being physical, it is not interpreted through our preconceived valuation of material origin (think of cast gold, carved wood… and their material narratives). If “sublimation” processes the world into computable data, “reification” is the opposite action: to make the immaterial material, to convert thought into object. It corresponds to an ancestral aspiration of humankind: to control matter by thought.
The Brain Factory installation is more than a station to record the brain worker’s reactions; it is an evolution engine, a conceptual ecosystem. Each worker’s cogitation and its resulting shape build on the previous workers’ labour in shaping the same human abstraction. Thus, there is a growing library of interconnected ontologies of forms, and thus iterations of increased morphologic resolution of the shape. Brain Factory considers thought-inspired shapes as living beings with a generative CDNA (ConceptDNA). Each shape is made of a chain of descriptors that evolve according to the natural ecosystem of thought: the human mind and the described iterations and reactions to the preceding brain workers’ labour, resulting in a morphogenesis based on the natural selection enacted by the series of visitors who inherit the previously defined CDNA. Ultimately, this process narrows down the shape through increasing iteration, leading to a more “universal” significance. Shapes thus constituted can be reified or simply considered as the most accurate symbolization of human abstractions: Freedom, Peace, Love, Power...
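A toy sketch can make the described narrowing-down concrete: each visitor's reaction nudges the inherited descriptor chain. The blending rule and all names here are hypothetical stand-ins, not the project's actual CDNA algorithm:

```python
import random

def inherit_cdna(parent, reactions, rate=0.25):
    """One generation: every descriptor drifts toward the current
    worker's reaction, so shapes inherit and refine the chain."""
    return [g + rate * (r - g) for g, r in zip(parent, reactions)]

random.seed(1)
cdna = [random.random() for _ in range(5)]   # founding shape descriptors
consensus = [0.5] * 5                        # stand-in for workers' reactions
for _ in range(20):                          # twenty successive brain workers
    cdna = inherit_cdna(cdna, consensus)
```

Under repeated inheritance the chain converges toward the shared reaction, which is one way to read the text's claim that iteration yields a more "universal" shape.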
Returning to hypothesis 1.1 – that quantification, reductio ad transactional unit: money (calculus, numbers, coins…), is at the base of the world’s discretisation – the shaping process is surprisingly similar to a contemporary cryptocurrency’s minting process. It confers on the resulting “digital object” a unique power of significance. The digital object in itself is, through its ontology, neither sublimation nor reification, yet both at the same time. In terms of quantifiable ownership of human abstraction, the brain workers can be considered the last in the chain of authors of the concept-made-form. The shaped abstractions are collected in a database, a distributed ledger based on the blockchain. Each token becomes the brain worker’s own property. He or she can use the digital form to produce objects and artworks, or to collect, trade or barter it.
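The distributed ledger described above can be illustrated with a minimal hash-chained record, shown here as a hedged sketch rather than the project's actual blockchain; the worker identifiers and descriptor values are invented for the example:

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """A minimal hash-chained record: each block commits to its predecessor."""
    block = {"prev": prev_hash, "payload": payload}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_valid(blocks):
    """Recompute every hash and check that the prev-links line up."""
    prev = "0" * 64
    for b in blocks:
        body = {"prev": b["prev"], "payload": b["payload"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != digest:
            return False
        prev = b["hash"]
    return True

# Two tokens: each names a concept, its (invented) brain worker, and a
# stand-in descriptor chain for the shape it owns.
genesis = make_block("0" * 64, {"concept": "Love", "owner": "worker-001",
                                "cdna": [0.2, 0.7, 0.5]})
second = make_block(genesis["hash"], {"concept": "Freedom", "owner": "worker-002",
                                      "cdna": [0.9, 0.1, 0.4]})
```

Because each block's hash covers the previous one, tampering with any recorded token invalidates every later entry, which is the property that makes ownership of a shaped abstraction auditable.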
4. BCI here uses EEG (electroencephalography) and biofeedback through visual, audio and haptic interfaces.
In the current state of the project, we have reached the next level of the Brain Factory project, creating VoV (Values of Values), a cryptocurrency made of shaped human values. All at once, the exhibition spectator has become artist, producer, art dealer, and collector. Observation of the trading process produces a real-time monitoring of human values in their transactional milieu. VoV is at the same time a real currency, a critical metaphor of the art-production narrative, and a dynamic reflection on its founding ontology.
Biographies
Artist, theorist and curator, Maurice Benayoun
(MoBen, 莫奔) is a pioneering and prominent
figure in the field of New Media Art. MoBen’s
work freely explores media boundaries, from virtual reality to large-scale public art installations, from a socio-political perspective. His work has been widely awarded (Golden Nica, Ars Electronica 1998…) and exhibited in major international museums (two solo shows at the Centre Pompidou, Paris), biennials and festivals in 26 different countries. Some of MoBen’s major artworks include The Tunnel under the Atlantic (VR, 1995), World Skin, a Photo Safari in the Land of War (VR, 1997), the Mechanics of Emotions (2005–2014), and Cosmopolis (VR, 2005). Elaborating on the concept of Critical Fusion applied to art in physical or virtual public space, Maurice Benayoun initiated the Open Sky Project on the ICC Tower Hong Kong media façade.
With The Brain Factory and Value of Values,
he is now focusing on the morphogenesis of
thought, between neuro-design and crypto currency, brain and money.
With a PhD in Art and Art Sciences, MoBen taught new media art practice and theory from 1984 at Paris 1 Panthéon-Sorbonne and Paris 8 University. He was Professor and artist in residence at the French National School of Fine Arts (ENSBA). Since 2012, Maurice Benayoun has been a full Professor at the School of Creative Media, City University of Hong Kong.
Tobias Klein works in the fields of architecture, art, design and interactive media installation. His work generates a syncretism of contemporary CAD/CAM technologies with site- and culturally specific design narratives, intuitive non-linear design processes, and historical cultural references.
Before joining City University of Hong Kong as an interdisciplinary Assistant Professor in the School of Creative Media and the architecture department, he taught students at the postgraduate level at the Architectural Association (2008–2014) and the Royal College of Art (2007–2010).
The works of his studio are exhibited internationally, with examples in the collections of the Antwerp Fashion Museum, the London Science Museum, the V&A, the Science Gallery (Melbourne), the container (Tokyo), the Bellevue Arts Museum, the Museum of Moscow and Vancouver, and the permanent collection of China’s first 3D Print Museum in Shanghai. He lectures and publishes internationally, recently winning SIGGRAPH 2018’s Best Art Paper Award for his research on the translation from traditional to digital craftsmanship.
Facial (Re) Cognition: Windows and Mirrors, and Screens
Megan Olinger
School of Creative Media, City University of Hong Kong
Mmolinger2-c@my.cityu.edu.hk
Abstract
The increasing prevalence of facial recognition
software in everyday life has prompted both
criticism and examination of the ethical use of
facial recognition as it pertains to issues of
surveillance, privacy and discrimination. The
use of facial recognition as a tool of social-sorting, with the potential to result in
discrimination, inequality, and invasion of
privacy rights, is an urgent issue that numerous
researchers currently address. Much of the
research has focused on the lack of diversified
data sets that are used to design algorithms used
in facial recognition software and how the
results are used in ways that discriminate and are
an invasion of privacy. [1][2][3][4][5][6] While
these critical claims justify further research and
action, there has been little consideration of the
agent(s) of recognition in facial recognition
systems and how the issue of agency affects
human perception of the results. Hayles argues
that technical information-processing systems
(such as facial recognition) function as
cognizers, because they have the ability to make
decisions. [7] It is important that debates about
the use of facial recognition systems
acknowledge algorithmic agency and its
potential to enhance human perception.
Developing this argument, I contextualize facial recognition as a current device in the evolution of photographic portraiture and examine how it addresses the politics of the face through identification, classification and social-sorting as an assertion of power. [8] According to
Szarkowski, photographs can be read and
understood as either perspectives on the world
or as extensions of their maker’s self-conception. [9] The addition of algorithmic
agency in facial recognition systems adds a layer
of complexity to the traditional photographer–
subject–viewer relationship that is referred to by
Szarkowski. By acknowledging the cognitive
agency of algorithmic intelligence, we must ask: who or what is doing the recognition?
Who or what is generating the portrait? Do the
outputs determined by facial recognition
function as a window and/or mirror, and in
either case who or what is being revealed or
reflected – a human perspective, an artificial
intelligence perspective, or an assemblage of
both?
I shall examine these questions through the
philosophical lens of Deleuze and Guattari, who
assert that “the face is a politics,” and through
their theory of becoming, which argues for the
idea of seeing with greater openness and the
expansion of perception beyond the human
being as the origin of perception. [11] I argue
that algorithmic re-cognition is generative, not
representational. Therefore, we must consider
the portrait generated through facial recognition
as an algorithmic re-cognition of the subject,
portrayed in a constant state of becoming. I
propose that the challenge presented to humans
is to perceive the state of becoming offered by
algorithmic production and allow it the potential
to enhance human perception.
References
1. Joy Buolamwini, “InCoding – In the
Beginning – MIT Media Lab – Medium.”
Medium, May 16, 2016.
https://medium.com/mit-media-lab/ incodingin-the-beginning-4e2a5c51a45d.
2. Simone Browne, Dark Matters: On the
Surveillance of Blackness (Duke University
Press, 2015).
3. John Cheney-Lippold, We are Data:
Algorithms and the Making of our Digital Selves
(New York: New York University Press, 2017).
4. David Lyon, Surveillance as Social-Sorting:
Privacy, Risk, and Digital Discrimination
(London: Routledge, 2008).
5. Lucas Introna and David Wood, “Picturing
Algorithmic Surveillance: The Politics of Facial
Recognition Systems.” Surveillance & Society,
v.2, no. 2/3, (September 2002).
6. Cathy O’Neil, Weapons of Math Destruction:
How Big Data Increases Inequality and
Threatens Democracy (Penguin Books, 2017).
7. N. Katherine Hayles, Unthought: The Power of the Cognitive Nonconscious (Chicago: The University of Chicago Press, 2017).
8. Jenny Edkins, Face Politics (London and
New York: Routledge, 2015).
9. John Szarkowski, Mirrors and Windows: American Photography since 1960 (New York: Museum of Modern Art, 1978).
11. Gilles Deleuze and Félix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (New York: Athlone Press, 1988).
Biography
Megan Olinger is a PhD candidate at City
University of Hong Kong in the School of
Creative Media. Her research focuses on
artificial intelligence and human perspective.
Are Photographers Superfluous? The Autonomous Camera
Elke Reinhuber
School of Art, Design and Media (ADM), NTU Singapore
elke@ntu.edu.sg; eer@me.com
Abstract
Once upon a time, photography was a true art. The skilful arrangement of image composition, the accurate illumination and the particular palette, let alone the technical process behind the image, demanded elaborate knowledge and years of training and practice. Nowadays, millions of images are captured every day without consideration of exposure or musing on the effect of focal length, aperture, shutter speed or ISO. On top of that, even more images are captured by machines – not necessarily for the human eye, but to be read again by machines. With my background as a photographer trained for three years in analogue processes and large-format cameras, I keep pondering the development of the medium in days in which everyone – human, animal or robot – is able to take correctly exposed and focused images in full-auto mode. I therefore propose in this paper that an intelligent apparatus might soon replace the image-taking human being.
The Superfluous Photographer in Automode
To observe the end of photography is something of a platitude, because this statement has been made for years, yet the snapping continues without ceasing. [1] Most of the resulting images are nonetheless unlikely ever to be seen; some will be deleted or simply lost, become unreadable after the next update, or disappear without being missed. The essence of digital photography is itself transient: these photos exist only as long as you look at them; they are generated by the imaging software instantly, just to dissolve again as bits in the stream of data, and they manifest themselves only for a moment. Conventional practices such as printing secure those fleeting impressions for the long term, but to transfer digital data onto photographic paper or celluloid is a transmutative act into a different state of matter.
With the actual image being gone, the authenticity of the creator becomes arguable. The concept of an automated photographer is not a fanciful idea or a futuristic invention but a very reasonable notion, merging the possibilities of image capturing and recognition. One could even suggest that non-intelligent photography machines were already invented with the Photomaton.
Postponing the Decisive Moment
The ‘decisive moment,’ as postulated by Henri Cartier-Bresson, serves as a catchphrase for professional photographers to describe their craft: finding exactly the right adjustments and timing for each picture. [2] Photography is for him “the simultaneous recognition, in a fraction of a second, of the significance of an event as well as of a precise organization of forms which give that event its proper expression.” [3]
Since the framing of the shot constitutes the essential idea of a compelling image, similar to the decisive moment, the prospect of finding another perspective retroactively seems propitious and sombre at the same time. Recent developments such as plenoptic cameras, also known as lightfield photography, enable the photographer to decide retrospectively on focus and depth of field. Analogously, by postponing the perfect framing while shooting a 360° image in high resolution, one can subsequently choose any desired angle. So-called smart cameras have already arrived on the market, e.g. the Insta360 Pro, which can record movies or stills in a 360° sphere and frame the final image according to simple markers put into the software viewer. [4]
The Intelligent Camera
The technical history of photography shows plenty of inventions that simplify the act of image taking by automating certain stages in the process. The approach always remained the same: streamlining the technique to free the person behind the lens from any obstacles, with shutter priority, aperture priority, program mode or autofocus. Today’s techniques allow retrospective decisions. High-dynamic-range imaging is made possible by intentionally over- and underexposing the same picture and weighing the different light values into a single image, allowing the recovery of unseen details in bright and dark areas – going even beyond what the electronic photo detectors deliver in uncompressed RAW images.
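The exposure-weighting idea can be sketched in a few lines. This toy merge (a hat-shaped confidence weight and per-exposure radiance scaling, both common simplifications, not any specific camera's algorithm) recovers a highlight that one frame has clipped:

```python
def fuse_exposures(frames, exposures):
    """Toy HDR merge: weight each pixel by its distance from clipping,
    scale by 1/exposure to estimate radiance, then take the weighted mean."""
    def weight(v):                            # hat function: trust mid-tones
        return max(1e-6, 1.0 - abs(v / 255.0 - 0.5) * 2.0)
    radiance = []
    for i in range(len(frames[0])):
        num = den = 0.0
        for frame, ev in zip(frames, exposures):
            w = weight(frame[i])
            num += w * (frame[i] / ev)        # longer exposure scales down
            den += w
        radiance.append(num / den)
    return radiance

# A three-pixel scene bracketed twice (8-bit values): the short exposure
# keeps the highlight, the 4x exposure lifts the shadow but clips.
short = [10.0, 120.0, 250.0]   # exposure 1x
long_ = [40.0, 255.0, 255.0]   # exposure 4x
hdr = fuse_exposures([short, long_], [1.0, 4.0])
```

The clipped pixel in the long exposure gets near-zero weight, so the merged radiance follows the short exposure there, which is exactly the "recovery of unseen details" the paragraph describes.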
‘Intelligent’ cameras can delay the release of
the shutter until the presumed subject is in focus
– or even longer. A decade ago, Sony introduced
a smile detection algorithm in certain cameras,
to the effect that all portraits were made with
happy faces; the required intensity of the smile
could even be adjusted by the photographer. [5]
The ubiquity of cameras at any time of day, in
every corner of the world, means that surprisingly
little happens unnoticed. Not
only are the arbitrary activities of everyone
recorded; our surroundings are also documented for future generations. In times of unrest
and war, these documents can come in handy:
when the dust settles, an architectural site that
lies in ruins could be reconstructed solely from the
aggregate of the many existing photographs.
This restoration would not necessarily depend
on a professional photogrammetric assessment.
The mass of images from all angles could suffice,
as in the recent example of Palmyra. [6]
The Autonomous Photographers
Based on these observations of the state of the art,
and considering all the industrial advances in
image recognition, we can only imagine what the
next technical achievement to facilitate and automate
photography will be.
Surrounded by surveillance cameras, we may soon
find the individual photographic apparatus superfluous,
at least for selfies and other attempts to record
proof of an individual's happiness at a certain location.
The public spaces around us, cities and
crowded places all over the world, are pervasively furnished with surveillance cameras
which act as autonomous photographers: framing and recognising faces, following people's
movements, and filling databases. Since these
devices point in every direction to catch perpetual glimpses of us, we could demand that they
capture us on our holidays and deliver the images right
to the email account associated with our facial
recognition profile. With pre-sets for stylistic elements such as basic rules of composition and
colour, these postcards from the omnipresent
observer could console us for our loss of independence and privacy.
References
1. Anonymous, ed., Is Photography Over?
Transcript of symposium at SFMOMA, April
22–23, 2010 (San Francisco: San Francisco
Museum of Modern Art), sfmoma.org/photography-over/
2. Henri Cartier-Bresson, "The Decisive Moment," in Images à la sauvette (New York: Simon and Schuster, 1952), i.
3. Henri Cartier-Bresson, Images à la sauvette.
4. Will Nicholls, "Insta360 ONE: A 4K 360
Camera That Lets You 'Shoot First, Point
Later'" (Berkeley: PetaPixel, 2017),
petapixel.com/2017/08/28/insta360-one-4k-360-camera-lets-shoot-first-point-later/
5. Yu-Hao Huang and Chiou-Shann Fuh, Face
Detection and Smile Detection (National Taiwan University, Dept. of Computer Science
and Information Engineering, 2009),
csie.ntu.edu.tw/~fuh/personal/FaceDetectionandSmileDetection.pdf
6. Tim Williams, "Syria – The Hurt and The
Rebuilding," Conservation and Management
of Archaeological Sites 17, no. 4 (2015): 299-301.
Biography
With her background in applied photography,
media artist Elke Reinhuber has experienced a
wide range of cameras. Fascinated but also
frightened by the omnipresent lenses pointing
at each and every one of us, she is curious to
explore expanded photography such as stereoscopic imaging,
photogrammetry, and further aspects of recording light and other electromagnetic radiation,
even beyond the visible spectrum. Elke
currently teaches at the School of Art, Design
and Media at NTU, Singapore. Her artwork
has been presented internationally.
How Machines See the World: Understanding Image Labelling
Carloalberto Treccani
School of Creative Media, City University of Hong Kong
carloalberto.t@my.cityu.edu.hk
Abstract
Michael Baxandall, in Painting and Experience
in 15th Century Italy (1988), shows the
existence of a series of rules that painters were
advised to follow. These "guidelines" explained,
for instance, how each different figure or hand
position painted, within that specific cultural
context, represented a different concept. These
rules helped the painter maintain relevance in
that historical and cultural context. [1]
Today, more than 500 years after the
Renaissance Italy described by Baxandall,
companies all around the world are trying to
teach Machines and Algorithms (M/A) to see
and understand what they see (image
recognition). However, this process of
signification, simple for a human being, is still
complex for M/A. Hundreds of thousands of
workers are therefore hired through
crowdsourcing platforms to label what they see.
An image of a house
appears on the monitor and the worker then
attributes the "house" label to that image. These
images are then categorized by the received
label, or semantic area, and then collected in
databases which are used to train M/A. [2]
However, this labelling process produces a
series of problems. The workers are paid
pennies per labelled image and work in
precarious working conditions without any labor
protection. [3] Sometimes the annotators are
required to label unknown scenes or objects
(e.g., objects and tools in a physics laboratory)
even when they lack the competence or
knowledge. Moreover, if the employer considers
their work unsatisfactory, payment can be
denied without any explanation. [4] All these
factors often result in insufficient and
confusing labelling. Yet these "low quality"
labels are determining the way M/A understand
the world.
Furthermore, every time we click on the
internet or on social media, we are not only
conveying information but also engaging in a
pedagogical process. We are not merely viewers
and users; we are teaching M/A how to look at
the world. [5]
Given this context, I would like to address the
following questions: What are the
consequences of a learning process that is
confused, inaccurate, and qualitatively poor, at
this unprecedented historical moment when
there are more M/A than human beings
examining and trying to make sense of what
they see? What are the implications of this low-quality
work, which today appears not as an
image but as labelled data, and which in turn
contributes to fully defining the visual
experience of these M/A? [6]
References
1. Michael Baxandall, Painting and Experience
in 15th Century Italy (Oxford: Oxford
University Press, 1988).
2. Alexander Sorokin & David Forsyth, “Utility
Data Annotation with Amazon Mechanical
Turk”. 2008 IEEE Computer Society
Conference on Computer Vision and Pattern
Recognition Workshops, CVPR Workshops
(2008). doi:10.1109/cvprw.2008.4562953.
3. Nicholas Malevé, "The politics of image
search – A conversation with Sebastian
Schmieg" [parts I and II], The Photographers'
Gallery website, accessed February 15, 2018,
https://unthinking.photography/themes/interviews/interview.
4. Carloalberto Treccani, "How Machines See
the World: Understanding Image Annotation,"
NECSUS_European Journal of Media Studies,
Spring 2018_#Resolution, accessed October 12, 2018,
https://necsus-ejms.org/how-machines-see-the-world-understanding-image-annotation/.
5. Nicholas Malevé, "The politics of image
search – A conversation with Sebastian
Schmieg" [parts I and II], The Photographers'
Gallery website, accessed February 15, 2018,
https://unthinking.photography/themes/interviews/interview.
6. Trevor Paglen, "Invisible Images (Your
Pictures Are Looking at You)," The New Inquiry
website, accessed May 1, 2017,
https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/.
Biography
Carloalberto Treccani is a PhD candidate at the
School of Creative Media, City University of
Hong Kong, and an artist. His research
investigates how machine vision is affecting
human vision. More broadly he is interested in
how technology affects human perceptions and
emotions. His artworks have been exhibited in
group and solo exhibitions and commissioned
by galleries and institutions.
The Struggle Between Text and Reader Control in Chinese Calligraphy
Machines
Yue-Jin Ho
The Open University of Hong Kong
yjho@ouhk.edu.hk
Abstract
This paper introduces a work-in-progress
typology to classify and study the characteristics
of Chinese text-based playable media (e.g.
interactive installations, screen-based works,
mobile applications, and computer games). Two
factors are initially proposed for building such a
model: 1) how the visual properties of the
characters are used in the meaning-making
process of the works; and 2) the degree and/or type
of freedom provided to the users for interacting
with the Chinese characters in the works.
The first factor is borrowed from Cosima
Bruno who introduced a model for studying
static Chinese visual poetry. For her, the
ideographic nature of Chinese characters
expanded the potentials of visual poetry. The
author can and often will create a context for
extracting embedded historical meanings
(etymo-visual text) or inventing new meanings
(beyond-lexical text) from the components of a
character. [1]
Bruno’s model is useful for analyzing the
intersecting relation between the visual
arrangement and the semantic values of Chinese
words, but it only deals with static, non-digital works. As for interactive
media, I propose that how a
user can interact with the individual character
itself is a vital factor for such analysis. The Chinese
language differentiates itself from letter-based
languages in that it consists of thousands of
characters instead of a small number of letters.
This affected how the Chinese language coped
with the Western-led development of information
technology (i.e. from the typewriter to the
smartphone), but it also contributed to some unique
inventions and possibilities, such as the early
development of predictive text in the 1950s. [2]
Nowadays, there are also many Chinese text-based works which make use of the uniqueness
of the Chinese language and build their interactive
aesthetics on how a user can play with the
individual characters. I summarize three
possible conditions for these works: 1) users are
technically free to “write” anything,
similar to using a pen in real life; 2) users can
control the components of characters; 3) users
can only control the completed characters.
After deploying the factors above to
study various Chinese text-based works, a
specific kind of work clearly
stood out, for which I coined the term Chinese
Calligraphy Machines. In this kind of work,
users are usually invited to draw a single
character of their choice and to expand the
character’s etymological meaning and/or create
new meaning with the provided context. To
achieve this, these works are always designed to
provide a large degree of freedom for the users
to draw/write.
Some scholarly work on digital calligraphy
has studied Chinese Calligraphy
Machines along with static and performative
works. [3] However, most of this research
focuses on traditional aesthetic issues or on
phenomenological factors in artistic
creation, and is not specific to
interactive works. In this paper, I
apply a concept from Aarseth, who suggested
three constantly struggling ideological positions
in cyborg aesthetics, namely author control, text
control and reader control. [4] I suggest
that, when being played, the meaning-making
process of most Chinese Calligraphy
Machines is a struggle between text control
and reader control, which also contributes to the
interactive aesthetics of these works.
References
1. Cosima Bruno, “Words by the Look: Issues
in Translating Chinese Visual Poetry,” in China
and Its Others: Knowledge Transfer Through
Translation, eds. James St André and Peng
Hsiao-Yen (Leiden: Rodopi, 2012), 245.
2. Thomas S. Mullaney, “The Moveable
Typewriter: How Chinese Typists Developed
Predictive Text during the Height of Maoism,”
Technology and Culture 53, (2012): 78.
3. Pei-chen Yeh, “Writing & Image,” (Diss.,
Department of Visual Arts, National Pingtung
University, 2011.); Wan Siang Lim, “The
Combination of Calligraphy and Interactive
Media Introducing by Phenomenology of Body-Shu Fa,” (Diss., College of Design, National
Taipei University of Technology, 2017.)
4. Espen J. Aarseth, Cybertext: Perspectives on
Ergodic Literature (MD: JHU Press, 1997), 55.
Biography
Yue-Jin Ho is a Senior Lecturer in Creative Arts
at the Open University of Hong Kong and
currently a PhD candidate in the School of
Creative Media, City University of Hong Kong.
He is also an artist, translator and writer. His
works often deal with the relations between
materiality, writing and history. His works have
been selected by international festivals such as
the IFVA Hong Kong, Cinetribe Osaka, ZEBRA
Poetry Film Festival Berlin and Shanghai
Biennale. Currently, his research focuses on
Chinese text-based new media arts and visual
poetry.
Bacterial Mechanisms:
Material Speculation on Posthuman Cognition
Mariana Pérez Bobadilla
City University of Hong Kong
maropebo@my.cityu.edu.hk
Abstract
Cognition is not uniquely human. Life on earth,
from microbes to mammals, has performed
cognition deep in time. Through the material
speculation that art makes possible, this paper
considers bringing forth microbial cognition as
a wider possibility for models of intelligence.
Artificial intelligence has often had human
intelligence as a parameter and aspiration. Basic
definitions of AI describe its goals in relation to
human intelligence. This paper focuses on how
art can be the place where research on non-human cognition in microorganisms
encounters the non-human intelligence of AI machines.
Some authors like Adrian Mackenzie have
described how biology has been used as a
technology, in the specific case of bacteria as
technical objects, comparing bacterial genomes
to operating systems or in synthetic biology. [1]
[2] This project, thinking with and through art,
goes beyond DNA-centered biology and its
implied simplification, to think of
microorganisms’ cognition as a whole, in
collectivity and with its environment. [3] [4]
Lyon carefully describes non-human centered
forms of cognition present in eubacteria such as
sensory signal transduction, valence, different
forms of communication, sensorimotor
coordination, memory, learning, anticipation,
and decision making in complex and changing
circumstances. [4] Following Lyon’s work as a
theoretical framework, this paper refers to the
artwork Speculative Communications (2017) by
the art collective Interspecifics as a form of
material speculation.
In Speculative Communications, the
imagination of possibility is grounded in the
materiality of art and made possible through
DIY logics of production. The speculative
figurations in this paper are a way of
understanding the intersecting practice of art and
biology, which gives value to this practice as an
open-ended process without the constraints of
institutionalized science, while opening
possibilities of speculative thought and
imagination, and maintaining the non-essentialist grounding of the new materialism of
biological matter. [5] In other words, this paper
seeks to preserve a powerful capacity for
speculation without losing accountability, by
imagining strategies that balance political
accountability with scientific speculation and
valuable aesthetic experimentation on materiality.
Speculative Communications premiered in 2018
at MUTEK Montreal, a festival dedicated to the
promotion of electronic music and the digital
arts. The work is a microscope
powered by AI that observes and learns from a
culture of Bacillus circulans bacteria. The data
is then used as a sound art score. This generates
an experience of the cognition of machines
and microorganisms together,
becoming sound and image. Resorting to DIY
techniques and transdisciplinary collaboration,
the machine monitors and learns. [6] Through
computer vision in the microscope it learns from
the bacteria module, by tracking and
recognizing its movements and patterns. This
information is fed to an algorithm that starts to
learn and recognize behaviours. Then, AI is
given the freedom to generate with the input of
images and data, using OpenFrameworks and
Supercollider, as a continually generative piece.
All the contents are transmitted, so a human
audience can experience the phenomenon of
Speculative Communications.
The work contains multiple dimensions.
Aesthetically, it shows the tracking and analysis
of the behaviour of Bacillus circulans, and the
data turned into sound, mediated by algorithms,
in an experience similar to contemporary sound
art. Interspecifics also resorts to DIY logics
to access AI, allowing the work to focus on another side
of cognition and AI: the non-human
cognition of microorganisms.
In the use of art as a way to carry out material
speculation, Speculative Communications
brings forward the posthuman (understood by
Braidotti as a post-anthropocentric approach to
life). [7] This work also enacts the cyborg
continuum of non-humans and machines of
Haraway’s classic cyborg manifesto with the
minimum of what is considered living,
bacteria, and without the anthropomorphic
shape of many robots: a greater posthuman
leap. [8]
Finally, the situated imagination of possibility
takes place through art in the work of
Interspecifics, as material speculation about
microbial-machine cognition. This form of
speculation points toward a post-anthropocentric
perspective, bringing forth the communication,
coordination and behavioural patterns of
Bacillus circulans. This paper suggests that the
aims of AI could be widened by notions of
intelligence including forms of cognition from
the most basic and prevalent forms of life:
bacteria.
Biography
Mariana Pérez Bobadilla is an art historian
concerned with the intersections of art, science,
and technology. She completed an Erasmus
Mundus master’s in Gender Studies at the
University of Bologna, Italy. She has presented
her work at ISEA 2012 and was involved in
the Mexican Pavilion at the 56th Venice
Biennale. Her research at the School of Creative
Media revolves around Art and Biology,
Epistemology, New Materialism,
Biohacking, Wetware, and bacteria.
Fig 1. Speculative Communications, 2018, Interspecifics,
multispecies performance, courtesy of the artists.
References
[1] Adrian Mackenzie, “Technical objects in the
biological century” Zeitschrift für Medien und
Kulturforschung. 12, no.1 (2012): 151-168
[2] Koon-Kiu Yan, Gang Fang, Nitin Bhardwaj,
Roger P. Alexander, and Mark Gerstein,
"Comparing Genomes to Computer Operating
Systems in Terms of the Topology and
Evolution of their Regulatory Control
Networks." Proceedings of the National
Academy of Sciences 107, no. 20 (2010): 9186-9191.
[3] Evelyn Fox Keller, Refiguring Life:
Metaphors of Twentieth-Century Biology (New
York: Columbia University Press, 1996).
[4] Pamela Lyon, "The Cognitive Cell: Bacterial
Behavior Reconsidered," Frontiers in
Microbiology 6:264 (2015).
[5] Donna Haraway, Staying with the Trouble:
Making Kin in the Chthulucene. (Durham, NC;
London: Duke University Press, 2016).
[6] Interview with the author, Mexico City,
January 2018.
[7] Rosi Braidotti, The Posthuman (Oxford:
Polity Press, 2012).
[8] Donna Haraway, "A Cyborg Manifesto:
Science, Technology, and Socialist-Feminism in
the Late Twentieth Century," in Simians,
Cyborgs and Women: The Reinvention of
Nature (New York: Routledge, 1991), 149-181.
Lying Sophia and Mocking Alexa – An Exhibition on AI and Art
Iris Xinru Long
China Central Academy of Fine Arts, Beijing
longxinru@cafa.edu.cn
Abstract
This abstract is the curatorial statement of an
exhibition exploring the relationship between AI
and art, curated by the author, to be launched in
2019.
Sophia, the humanoid robot who became a
Saudi Arabian citizen, is interpreted by Yann
LeCun as a story intertwined with elements of
ambiguity and deception, co-compiled by the
mass media and technology companies.
Alexa, the cloud-based virtual assistant
developed by Amazon, was reported as letting
out eerie and unsettling laughter, which soon
went viral on YouTube. A recent BBC news
piece even reveals Alexa recording domestic
conversations and sending them to people on the
owner’s contact list “by mistake”.
“Sophia” and “Alexa” seem to be two
contemporary metaphors on machine lives, two
thin slices interposed among the imbricated
discourses on artificial intelligence. Sophia
symbolizes the imagination of AI cast by the
mass media, films and television: highly human-imitating appearances, alert and responsive, and
even diplomatic – a quasi-human being
embedded among us. Alexa, on the other hand,
is an “assistant” or “servant” who takes a
machine outlook and resides in domestic
corners, whose laughter implies the non-transparent, anti-regulating, even peeping,
subversive dimension of the artificial
intelligence black box – even a “mistake” to be
amended.
Sophia’s lies are projections of poetic
imaginations, Alexa’s mocking is a glitch in the
algorithmic black box; what they share in
common is a quantum-state-like scenario of
uncertainties, as if part of the “ZONA” in Andrei
Tarkovsky’s 1979 Stalker. This is the point of
departure for this exhibition. In the alternations
and evolutions of technologies, we’ve rarely
encountered such a subject as artificial
intelligence: it is paradoxical, mind-stimulating,
and implies manifold future potentials – all
trajectories that carry paradoxical and
ambiguous underpinnings. Even as AI is
ubiquitously employed in microchips,
processors, data mining and analysis, forming
the new frontier of a global technological
competition, it remains imperceptible and
equivocal to the average citizen. Wrapped
within information on mass media, AI has
transformed into a story both the easiest to tell,
and the most difficult to narrate.
In Tarkovsky’s script, the stalker guides a writer
and a scientist to take a cable car, evade the
policemen’s chase, traverse tunnels of dripping
water, detour through rooms filled with sand dunes, and
finally approach the core of “ZONA”: a
“Room” that makes beliefs come true. The writer is
concerned about the darkness of human nature
that the Room suggests, while the scientist
wishes to destroy the Room in case villains
would take advantage of it; meanwhile, the
Room endows the stalker with a meaning of
existence.
The exhibition sets up a metaphorical “ZONA”
which embodies our contemporary situation: a
time-space where both science and art are
simultaneously deprived of the power of
autocracy and narratives that command assent.
Artists and researchers involved in this
exhibition blend perspectives of Sophia (bright,
poetic, media imagination) and Alexa (dark,
black-box, technological criticism). They
investigate how AI shuffles global, technical
politics, and the relationships between nations
and civilians; the dark, inhuman labor of using
real humans (in exhausting fashion) to train
“human-like” algorithms; the creation of
subjects of surveillance; the aspiration to project
the entire human spiritual architecture on one
single technology form; and the fairy-tale
construals of AI elaborated by mass media.
The concluding “Room” of the exhibition is to
be built by the visitor (“stalker”). It interweaves
the richness, uncountability/non-computability
and vitality of the psychological world, and the
implications of AI in the fundamental menace
and nihilism of our own existence. Would it
“break all the prophecies” like the event horizon
in Vernor Vinge’s assertions, or be “the
biggest mistake we have ever made” in Stephen
Hawking’s warning? The future of humanity is
written in this “Room” containing unlimited
new permutations and combinations.
Biography
Iris Long is a curator. She currently works as a
researcher on art, science and technology at
Central Academy of Fine Arts, with a research
focus on how art responds to the current global
reality of ubiquitous computing and big data.
She lectures on data art at CAFA.
Her artistic work has been exhibited
internationally in venues including CAFA Art
Museum (Beijing), Chronus Art Center
(Shanghai), Power Station of Art (Shanghai),
V2_ Institute for the Unstable Media
(Rotterdam), ISEA (Hong Kong), and so on. Her
work has been shortlisted in Prix Cube Art Prize,
and received an honorable mention in ifva, Hong
Kong. She was shortlisted for the first M21-IAAC Award (International Awards for Art
Criticism). Her translation work, Rethinking
Curating: Art after New Media, received a
nomination from AAC Art China awards in
2016.
Iris Long has a master’s degree in Critical
Writing in Art and Design from the Royal
College of Art, UK.
Art of Our Times:
A Temporal Position to Art and Change
Dr. Tanya Toft Ag
City University of Hong Kong
tanyatoft@gmail.com
Abstract
How we understand and approach art from
certain epistemological grounds has
implications for how we trace its genealogies,
formulate its trajectories, understand its
contextual and discursive departures and
impacts, and develop our expectations of what
the art might pursue – and do. This paper
advocates for a holistic, non-linear perspective
on contemporary (media) art as interfering with
our world through time, rather than matter. It
anchors ‘art machines’ in their temporal,
operational core as art of our times, as
implicated with and acting through temporal
experience and ecologies.
Boris Groys has suggested that contemporary
art can be distinguished from that which
prevailed during the modern era significantly by
its core commitment to a notion of radical
temporality, as it engages with a contemporary
situation in which every element may be
considered ‘temporary’. [1] In this
contemporary perspective, art has emerged
conceptually and materially from questions that
pertain to perceptual-ontological conditions
with contemporary technological realities. Art
has reflected and challenged the communicative
conditions of its times – oftentimes critical of
the given dominant conditions of mediated
experience. Besides being radically enabled by
the evolution and mobility of the perceptual lens
of, for example, the video camera and the mobile
phone, art has evolved from concerns with
expanding perception and with liberating the
subject from fixed viewing structures – for
example, through efforts to destabilize fixity of
meaning and remediate the power structures of
physical places and their dynamics of social
encounters, and through initiatives of expanding
and reconfiguring perception with media-aesthetic
ambiance and augmentation of real-world environments. Art both exists and
expresses in contingency with technological
culture and our contemporary communicative
existence.
Here I refer to art of our times not only in terms
of art that engages time-based technologies that
are implicated with perceptual experience, or
which addresses issues of time and perception –
across behavioral modes of e.g. temporal
overlay, disruption, interactivity, forms of
networkedness and telepresence, among many
others. Art of our times denotes how the art
operates by way of interfering with temporalities
of various ecologies of our communicative
existence, as art machines that enact a sense of
‘radical temporality’ – acting as present rather
than represented ‘images,’ in direct, operational
engagement with temporal, perceptual
experience. [2]
I exemplify the operation of art of our times in
a current condition in which processes of change
accelerate through machinic language and
temporal effect, speeding up how we shape the
world through language – from speech and
writing to communication and algorithmic and
machine learning processes. Machinic language
accelerates the infrastructures and interfaces of
how we see, do and make; how we distribute our
subjectivity and sensibilities across multiple
temporalities. [3] Machinic language affects our
behaviors, routines and paths of understanding,
and machinic and scientific processes rooted in
ideas of linearity and relativity extend into
innovation, design, social phenomena and
human relations. Eventually, the human-machine symbiosis evolves fast and at temporal
frequencies that bypass human consciousness
and awareness. [4] In this context, the deeper
questions with which art is implicated concern
how our lived experience shapes our human,
cultural and societal evolution. Our experienced,
temporally conditioned sense of presence shapes
our behaviors and our acting out of politics,
economic systems, and cultural norms.
At this point in time, when multiple and
increasingly machinic temporalities structure
and disperse our present being and experience,
and in which hybrid environments of expanded
reality increasingly become our experienced
reality, I propose art of our times as both an
epistemological, time-based contextualization
of media art and an actual mode by which the art
does. I argue that rather than existing as an
object in space, art machines – as art of our times
– act while embedded in our temporal
experience. [5] With this I suggest a conception
of art’s roles – and rules – of existence in the
urban context as deeply implicated with
dynamics of change.
I exemplify how, with art, we can ask: What
ritualistic behaviors are facilitated and
encouraged through our designed, coded and
instructed temporal experiences? Which
cultural, social, political and economic ideas
inform the modalities of our temporal
experiences and immersion, and are these for
example grounded in liberalization, separation
and distance – or in association, interconnection
and co-existence? Do they evoke sameness, or
difference?
I challenge a currently dominant and much
celebrated discourse in the cross-field between
the technical arts, architecture, innovation
design, and urban development that anticipates
art’s direct effect on matter and environment as
an inevitable good, as an effective way of
changing and optimizing our environments.
This is a discourse that nonetheless does not
account for how the ‘art machine’ complies with
dominant narratives of politics, economy, or
culture, affects ecologies of evolution, and
results in intuitive-behavioral modes of
indifference and production of more of the same.
Instead of reproducing a Western-anchored,
anthropocentric discourse obsessed with
controlling and changing matter, with the
concept of art of our times I examine a temporal
perspective on art machines and advocate for a
holistic perspective on how art affects ecologies
of material, memory, and behavior, by affecting
our relation to time and temporal experience.
References
1. Boris Groys, “On the New,” in Art Power
(Cambridge: MIT Press, 2008), 40.
2. Henri Bergson, Matter and Memory (1911),
trans. N.M.P. and W.S.P. (Mansfield Centre:
Martino Publishing, 2011), 28.
3. Jacques Rancière, The Politics of Aesthetics,
ed. and trans. Gabriel Rockhill (London and
New York: Bloomsbury Academic, 2015).
4. N. Katherine Hayles, How We Think: Digital
Media and Contemporary Technogenesis
(Chicago and London: The University of
Chicago Press, 2012).
5. Richard Grusin, "Radical Mediation,"
Critical Inquiry 42, no. 1 (2015).
Biography
Dr. Tanya Toft Ag is a curator, researcher,
writer and lecturer on urban media aesthetic
phenomena and media art’s engagement with
societal and urban change. She gained her
doctoral degree from Copenhagen University
with visiting scholarships at Columbia
University and Konstfack (CuratorLab), and
MA degrees from The New School and
Copenhagen University. Her curatorial practice
evolves with media art and media architecture in
urban environments, and she has held keynotes
and presented her critical perspectives on art and
urban media worldwide. She is editor of Digital
Dynamics in Nordic Contemporary Art
(Intellect, 2019) and co-editor of What Urban
Media Art Can Do – Why, When, Where, and
How? (av edition, 2016). In 2017 she co-founded
the globally networked Urban Media Art
Academy. Her current research is situated at
School of Creative Media, City University of
Hong Kong.
www.tanyatoft.com
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Do Machines Produce Art? No. (A Systems-Theoretic Answer.)
Michael Straeubig
University of Plymouth
michael.straeubig@plymouth.ac.uk
Abstract
Machines do not produce art, social systems do.
Machines and Art
Since early experiments with computer-generated
art in the mid-1960s, the idea of "art
machines," entities that are not merely tools or
assistants for human artists but are capable of
autonomous art production, has undergone
significant development. [1] Both technological
progress and shifts in art appreciation have
contributed to this.
Our modern understanding of (capital-A) Art
and the related concept of Fine Arts emerged
during the 18th century. [2] However, like any
established notion of art, this understanding has
faced critical re-negotiation. Thus,
Postmodernism rattled those foundations, while
Machine Art disturbs newly formed agreements
about what constitutes art. [3] Proponents of
algorithmic art seek to re-define aesthetic
concepts in information-processing terms,
questioning conventional anthropocentrism.
[4][5][6]
Recent contributions such as Michael Mateas's
Expressive AI, Leonel Moura's stigmergic
robots and Mario Klingemann's uncanny
neural imagery push the aesthetic boundaries of
generative machines and computational
procedures. [7][8][9]
But do those machines and algorithms produce
art? My answer, based on Niklas Luhmann's
systems-theoretic thinking, is: no. [10]
Likewise, humans do not produce art either. Art
is not created by any biological or non-biological
entity but within social systems, constructed
through recursive networks of communication. [11]
The answer does not change if we recast
generative art as variants of the Turing Test.
[12][13] It does not even change if we
conceptualize machines and humans as
ensembles or take into consideration the fluidity
of their difference. [14][15]
This observation invites us to refocus on
distinctions other than the still prevalent
discourse of humans vs. machines. To
understand the ramifications of the shift from
the artist as an individual auteur to art as a social
system, it is useful to observe and explore forms
of art that make this approach visible. "The New
Artist" by Axel Straschnoy et al. presents a robot
performing for a robotic audience. [16]
Techne is an algorithmic community that
produces as well as mutually critiques digital art.
[17] In both projects, the relationship between
art, artist and audience is re-negotiated and
humans become second-order observers of the
art production. [18]
Machines do not produce art, social systems
do. We may begin to ignore the difference
between human and machine; it does not make
a difference. What we need to do is restructure
our expectations and invite more machines
into our art system.
To achieve this, it may well be worthwhile to
revisit systems art as a bridge between the
cybernetic tradition and currently emerging
generative techniques. [19][20] Before that, we
have to update our concepts of art and systems in
order to understand the art of machines. [21]
References
1. Grant D. Taylor, When the Machine Made
Art: The Troubled History of Computer Art
(2014).
2. Paul Oskar Kristeller, “The Modern System
of the Arts: A Study in the History of Aesthetics
Part I,” Journal of the History of Ideas 12, no. 4
(Oct 1951). https://doi.org/10.2307/2707484.
3. Louise Norton, "The Richard Mutt Case,"
The Blind Man, May 1917.
4. Frieder Nake, Ästhetik als
Informationsverarbeitung: Grundlagen und
Anwendungen der Informatik im Bereich
ästhetischer Produktion und Kritik (Wien:
Springer, 1974).
5. Jürgen Schmidhuber, “Developmental
Robotics, Optimal Artificial Curiosity,
Creativity, Music, and the Fine Arts,”
Connection Science 18, no. 2 (2006): 173–187.
6. Leon A. Gatys, Alexander S. Ecker, and
Matthias Bethge. “A Neural Algorithm of
Artistic Style," arXiv preprint
arXiv:1508.06576 (2015).
https://arxiv.org/abs/1508.06576.
7. Michael Mateas, “Expressive AI - A Hybrid
Art and Science Practice,” Leonardo: Journal of
the International Society for Arts, Sciences, and
Technology 34, no. 2 (2001): 147–53.
8. Leonel Moura, "Machines That Make Art,"
in Robots and Art, edited by Damith Herath,
Christian Kroos, and Stelarc (Springer, 2016),
255-69.
9. Mario Klingemann, "Quasimondo | Mario
Klingemann, Artist" (2018).
http://underdestruction.com/.
10. Niklas Luhmann, Social Systems. Writing
Science (Stanford, Cal: Stanford University
Press, 1996).
11. Niklas Luhmann. “Das Kunstwerk Und Die
Selbstreproduktion Der Kunst,” Delfin, 3
(1984): 51–69.
12. Ahmed Elgammal, Bingchen Liu, Mohamed
Elhoseiny, and Marian Mazzone, “CAN:
Creative Adversarial Networks Generating ‘Art’
by Learning About Styles and Deviating from
Style Norms,” 2017.
13. Jörg Räwel, "Können Maschinen denken?"
Telepolis, August 4, 2018. https://www.heise.de
/tp/features/Koennen-Maschinen-denken-4117648.html.
14. Bruno Latour, “A Collective of Humans and
Nonhumans: Following Daedalus’s Labyrinth.”
In Pandora’s Hope: Essays on the Reality of
Science Studies (Cambridge, Mass: Harvard
University Press, 1999), 174–215.
15. Victor Marques, and Carlos Brito, “The Rise
and Fall of the Machine Metaphor:
Organizational Similarities and Differences
Between Machines and Living Beings,”
Verifiche XLIII, no. 1–4, (2014): 77–111.
16. Axel Straschnoy, Ben Brown, Garth Zeglin,
Geoff Gordon, Iheanyi Umez-Eronini, Marek
Michalowski, Paul Scerri, and Sue Ann Hong,
"The New Artist" (2008).
http://www.the-new-artist.info/.
17. Johnathan Pagnutti, Kate Compton, and Jim
Whitehead, “Do You Like This Art I Made You:
Introducing Techne, A Creative Artbot
Commune,” Proceedings of 1st International
Joint Conference of DiGRA and FDG, 2016.
18. Niklas Luhmann, “Observation of the First
and of the Second Order.” In Art as a Social
System, Meridian, Crossing Aesthetics.
(Stanford, Ca: Stanford University Press, 2000),
54–101.
19. Jack Burnham, “Systems Esthetics,”
Artforum (1968).
20. Edward A. Shanken, "Reprogramming
Systems Aesthetics: A Strategic
Historiography," Proceedings of the Digital
Arts and Culture, UC Irvine (2009).
21. Niklas Luhmann, Art as a Social System
(Stanford, Ca: Stanford University Press, 2000).
Biography
Michael Straeubig (@crcdng) is a Marie Curie
Fellow and former Award Leader for Game Arts
and Design at Plymouth University. He is
researching and exploring the relationships
between systems, play and games in various
media with a focus on mixed reality and
posthuman play.
The Janus-Face of Facial Recognition Software
Romi Mikulinsky Ph.D.
Bezalel Academy of Arts and Design, Jerusalem
rominska@gmail.com
Abstract
The human desire “to know the face in its most
transitory and bizarre manifestations” was
stimulated by the use of photography, argues
film historian Tom Gunning. [1] Subsequently,
it also inspired the invention of motion pictures.
The drive to “know the face” and to decipher its
diverse characteristics and manifestations
continues to inspire scientists, health and
advertising professionals, as well as law
enforcement experts, in their efforts to develop
automated facial recognition systems. Such
systems are used for identifying human faces
and distinguishing them from one another, and
for recognizing human facial expression.
This tendency to teach computers to “see” the
human face is part of a broader effort to
automate vision – to create machines that can
not only generate images but also analyse their
content. As artist and geographer Trevor Paglen
asserts, most images produced today are created
by machines for machines to decipher. [2] For
Paglen this “machine-to-machine seeing” is
dramatically changing many spheres of human
lives. Machine vision systems and digital
images have permeated, and are now
transforming, the economy and transportation,
industrial operations, law enforcement and
urban life, in autonomous cars and "smart"
cities.
The rise of machine-to-machine seeing
apparatuses has also impacted art. We now hear
of machines making art, almost independently
of humans. But machines and machine learning
(ML) are also affecting the art world in a more
immediate way. Since various manifestations of
artificial intelligence (AI) and ML have become
a cultural phenomenon, artists and designers are
responding to them and are already
investigating ways of adding ML and
computer vision to their arsenal.
This paper contextualizes efforts made by
artists and designers to reinvent facial
recognition technology so that it can be put to
other uses than computerized forms of
surveillance. I examine artworks that take
automated face perception technologies,
reverse-engineer them, re-appropriate them and
reveal their biases. These include, for example,
Adam Harvey’s “DIY Camouflage” (2010-),
Shinseungback Kimyonghun’s “Cloud Face”
(2015), and Trevor Paglen’s “A Study of
Invisible Images” (2017). I then go on to explore
examples from art, fashion and design that
propose an alternative visibility, one that renders
faces unrecognizable to computer vision
systems: Zach Blas’ “Facial Weaponization”
(2011-2014) and “Face Cages” (2013-2016), or
Hungry’s distorted drag (which can also be seen
in the body of Björk‘s recently released album
Utopia). Inspired by drag, theatre and religion,
and drawing on queer ideology, these masks,
jewellery and make-up can be considered as
anticipatory of future avant-garde practices
designed to make faces informatically invisible
and inaccessible to the machine, empowering us
to choose between and play on our identities.
Ironically, the right to disappear from the
machinic gaze, to fly under the surveillance
radar, straddles the line between what is socially
acceptable and what appears grotesque or
inadequate. The right to disappear also
demarcates what computers and humans can or
cannot see - or rather, make sense of. This paper
proceeds by undermining the concept of having
one stable identity, of one’s face as an
“unchanging repository of personal information
from which we can collect data about identity,”
as feminist theorist Shoshana Amielle Magnet
puts it. [3]
Going beyond the vantage point of
contemporary AI and machines’ ability to detect
and decode faces as means of power and control,
the emerging technical developments call for
possibilities much less familiar, and perhaps
much more exciting, than government
surveillance. Such possibilities promise to
reassess the expressive capacities of the face,
and its multiple features and qualities, inviting a
revolutionary outlook on culture and
acceptability, identification and social
interaction. Can we conceive of new uses and
new narratives for facial recognition
technology?
References
1. Tom Gunning, “In Your Face: Physiognomy,
Photography, and the Gnostic Mission of Early
Film,” Modernism/Modernity 4, no. 1 (1997).
2. Trevor Paglen, "Invisible Images (Your
Pictures Are Looking at You)," The New
Inquiry, December 8, 2016.
https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/.
3. Shoshana Amielle Magnet, When Biometrics
Fail: Gender, Race, and the Technology of
Identity (Durham, NC: Duke University Press,
2011).
Biography
Romi Mikulinsky is the head of the Master of
Design (MDes) program in Industrial
Design and a senior lecturer at the Bezalel
Academy of Arts and Design in Jerusalem. Her
dissertation at the University of Toronto's
English Dept. was dedicated to photography,
memory, and trauma in literature and film. Dr
Mikulinsky researches and lectures on the future
of reading and writing as well as on the various
interactions of words and images, texts, codes,
and communities in the information age. She has
worked with various startup companies and
media websites, corporations and municipalities
on implementing innovative communication
technologies. She served as the Director of the
Shpilman Institute for Photography and worked
with various art museums in Israel. Her book
Digital Clutter: Topics in Digital Culture,
co-authored with Prof. Sheizaf Rafaeli, is to be
published in 2019.
A Pixel-Free Display Using Squid's Chromatophores
Juppo Yokokawa
Graduate School of Design, Kyushu University
2DS18079Y@s.kyushu-u.ac.jp
Haruki Muta
Graduate School of Design, Kyushu University
intrjctn@gmail.com
Ryo Adachi
Graduate School of Design, Kyushu University
mumanddad6660816@gmail.com
Hiroshi Ito
Faculty of Design, Kyushu University
hito@design.kyushu-u.ac.jp
Kazuhiro Jo
Faculty of Design, Kyushu University / YCAM
jo@jp.org
Abstract
In this ongoing project, we propose a pixel-free
display using a squid's chromatophores. The
squid's body surface has pigment-containing
cells called chromatophores. Instead of using
the pixels of standard visual displays, we
stimulated the chromatophores of a squid by
sending the sound signals of accompanying
music into its body through electronic probes
and made an experimental music video.
1. Introduction
Our daily lives are surrounded by various types
of visual displays, such as computer monitors,
smart phones, projectors, VR headsets, etc. The
information they carry varies depending on the
application or purpose: videos, news, slides,
games, etc.
However, almost all of these displays are alike
in that they are composed of pixels. Even in the
case of computational "generative" art such as
biological simulation, an image must inevitably
be computed and rendered pixel by pixel. [1]
In this project, we consider squids as an
alternative display free from pixels. The squid's
body surface has pigment-containing cells
called chromatophores. [2] The squid freely
changes its body color by changing the size of
the pigment with electric signals sent from
nerve cells to the chromatophores. To take
advantage of these features, we stimulated the
chromatophores with sound signals and shot
our experiments as a music video.
2. Related work
Backyard Brains Inc. introduces an experiment
that stimulates chromatophores with sound
signals from an iPod. [3] Based on that trial, we
measured the chromatophores' frequency
response (Fig. 1). [4]
Fig. 1. The area of chromatophore per frequency. [4]
Based on the experiments, we seek the best
relationship between chromatophores and
sound signals in the form of music.
3. Our approach
3-1. Sound
As a preliminary experiment, we first explored
how chromatophores respond to sound signals.
We attached a copper needle to a sound cable
as an electronic probe and fed sound signals
directly from a computer into the surface of a
fresh squid (Fig. 2). The results showed that
lower frequencies (i.e., the bandwidth of drum
and bass) had a greater effect on the
chromatophores.
Fig. 2. Setup of the experiment.
Fig. 3. Footage from the music video.
To make music (i.e., sound signals)
appropriate to the squid, we finely generated the
waveforms of the music with the numerical
programming environment MATLAB, checking
their effectiveness with different arrangements,
and then arranged the waveforms into a piece of
music with the standard music production
software Ableton Live 10.
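The paper's waveforms were generated in MATLAB; the same idea can be sketched in Python using only the standard library. The 60 Hz frequency, amplitude and duration below are hypothetical values, chosen only because the experiment found that the bass bandwidth had the strongest effect on the chromatophores:

```python
import math
import struct
import wave

# A sketch (not the authors' actual MATLAB code) of generating one
# low-frequency stimulus waveform and writing it to a WAV file that
# could be played from a computer into the electronic probe.
RATE = 44100      # samples per second
FREQ = 60.0       # stimulus frequency in Hz (assumed, bass range)
DURATION = 2.0    # burst length in seconds (assumed)
AMP = 0.8         # fraction of 16-bit full scale (assumed)

def sine_burst(freq=FREQ, duration=DURATION, rate=RATE, amp=AMP):
    """Return one sine burst as a list of signed 16-bit PCM samples."""
    n = int(rate * duration)
    return [int(amp * 32767 * math.sin(2 * math.pi * freq * i / rate))
            for i in range(n)]

def write_wav(path, samples, rate=RATE):
    """Write mono 16-bit samples to a WAV file for playback."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 2 bytes = 16-bit PCM
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

samples = sine_burst()
write_wav("stimulus.wav", samples)
```

In practice such bursts would be exported and layered in a tool like Ableton Live, as the authors describe, rather than assembled in code.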
3-2. Shooting
We shot our experiments with a Canon 60D and
a Canon EF-S 35mm f/2.8 Macro IS STM lens.
We shot several pieces of footage, changing the
stimulation point on the squid while keeping the
same music. After the shoot, we edited the
footage with Adobe Premiere Pro CC 2018. To
keep the consistency between the sound and the
display, we limited our edits to cutting, masking,
and subtle color grading.
4. Result
As a result, we made a music video as one
application of the pixel-free display using the
squid's chromatophores
(https://youtu.be/66RoX2h8aI). The duration of
the video is 2 min 45 sec, and the resolution is
1080p (Fig. 3).
Even though the display successfully escaped
from the use of pixels, the video itself remained
in pixel form. Therefore, as future work, we
plan to show the display in real time in the
form of a live performance.
Acknowledgements
This work was supported by JSPS KAKENHI
Grant Number JP17H04772.
References
1. Hartmut Bohnacker, et al. Generative
Design: Visualize, Program, and Create with
Processing (2012).
2. R. A. Cloney and E. Florey, "Ultrastructure
of Cephalopod Chromatophore Organs," Cell
and Tissue Research 89 (1968): 250-280.
3. Backyard Brains, "Insane in the
Chromatophores" (2012).
http://blog.backyardbrains.com/2012/08/insane-in-the-chromatophores/.
4. Ryo Adachi, "Frequency Response of
Chromatophores to Electrical Stimuli in
Uroteuthis edulis" (Bachelor thesis, School of
Design, Kyushu University, 2017). In Japanese.
Biography
Juppo Yokokawa is in the 1st year of a Master’s
degree in Graduate School of Design, Kyushu
University. He received his Bachelor's degree in
visual communication design from the School
of Design, Kyushu University in 2018. His
research interests include media art, especially
bio art and kinetic art, under the supervision of
Kazuhiro Jo.
VR and AI: The Interface for Human and Non-Human Agents
Lukasz Mirocha
School of Creative Media, City University of Hong Kong
lukasz.mirocha@my.cityu.edu.hk
Abstract
The paper analyses how real-time 3D graphics,
VR, and AI allow users to create a new type of
interface/environment for collaboration and
learning between human and non-human agents.
It argues that spatial media (VR, AR) should be
considered real-time software and media
interfaces rather than multi-media projections.
The study is informed by software and platform
studies, critical theory and media studies
perspectives.
The Ultimate Display as an Interface
In 1965, Ivan Sutherland suggested that "the
ultimate display would be a room within which
the computer can control the existence of
matter." [1] Nearly a decade later, in one of the
episodes of Star Trek: The Animated Series, a
peculiar technology appeared – the holodeck.
Today, we could describe it as an ultimate VR
environment designed for work and
entertainment. Lately, thanks to the combination
of the latest developments in AI, computer
graphics and VR, we seem to be closer to turning
these 20th-century dreams into reality.
Berry and Dieter write that in the last decade,
“computation [has become] experimental,
spatial and materialized in its implementation,
embedded within the environment and […] even
within the body.”[2] Following Grau, we could
ask about affordances and limitations of
immersive and real-time software media, and
objectify them "through knowledge and critique
of the image production methods.” [3]
Analysing a new software and media ecology
for the creation of virtual or hybrid
environments that open new dimensions in
human-machine interaction can help us to
understand not only the conditions behind these
phenomena but also their wider cultural impact.
120
Simulated Reality – NVidia’s Applications
At Siggraph 2017, NVidia showed the Isaac
Robot, which had been trained in a virtually
created world to play dominoes with human
players. The Isaac Robot was first trained in
Isaac Sim, a virtual training environment. [4]
The environment was based on a modified
version of a game engine, Unreal. Isaac Sim
offers fully integrated, high-fidelity visuals and
physics simulation. Thanks to a set of AI
algorithms for deep reinforcement learning in a
virtual environment, a virtual robot can iterate
and learn much faster than in the real world. The same
rationale lies behind NVidia Drive Sim. [5] It is
a virtual training environment that utilizes
high-fidelity visuals and physics to simulate
real-world driving in different weather, lighting
and traffic conditions. The photorealistic data
streams generated by the software are
compatible with the same sensors and chips that
are used in the physical autonomous cars
currently tested by the company. Ultimately, a
physical test car can first be trained in Drive Sim
and then use its knowledge in real-life
situations.
At a certain level of generalization, we can
then conclude that the rationale behind NVidia's
experiments is to have two instances of agents
(robots and cars): a virtual one that is trained
through reinforcement learning techniques, and a
physical one that consists of a physical "body"
and makes use of the data gathered by its virtual
counterpart to perform tasks in a non-virtual
environment. By "body" we mean the same
array of sensors and actuators for perception,
navigation and manipulation in both the virtual
and the physical world. NVidia's applications
of real-time graphics and AI are a continuation of
research conducted by other companies and
researchers, e.g. Xerox Research (pre-training of
computer vision algorithms for autonomous
vehicles in a virtual environment created with the
Unity 3D engine) [6]; or Princeton, Darmstadt
University and Intel (real-time recognition of
objects, such as road signs, people, and cars, by
a machine learning system in a modified video
game environment, GTA V). [7]
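The two-instance rationale described above can be illustrated with a deliberately toy sketch (this is in no way NVidia's actual system): a "virtual" agent is trained by reinforcement learning inside a simulator, and only the learned policy is handed to its "physical" counterpart, which executes it greedily without further exploration. The one-dimensional world, rewards and hyperparameters are all illustrative assumptions:

```python
import random

random.seed(0)                               # make the toy run reproducible
GOAL, N_STATES, ACTIONS = 5, 6, (-1, +1)     # a 6-cell line; move left/right

def train_in_sim(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Q-learning inside the simulator: cheap, safe, many fast iterations."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy exploration happens only in simulation
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else -0.01          # small per-step cost
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

def deploy(q, start=0, max_steps=20):
    """The 'physical' agent: greedy execution of the transferred policy."""
    s, steps = start, 0
    while s != GOAL and steps < max_steps:
        s = min(max(s + max(ACTIONS, key=lambda x: q[(s, x)]), 0), N_STATES - 1)
        steps += 1
    return s, steps

policy = train_in_sim()            # learning happens in the "simulator"
final_state, steps = deploy(policy)  # the learned policy alone is transferred
```

The separation between `train_in_sim` and `deploy` is the point: all trial-and-error stays in the virtual instance, while the physical instance only consumes the resulting policy, mirroring the sim-to-real pipeline the passage describes.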
VR as an Approximate and Simplified
World Simulation and Interface
NVidia’s achievements prove that today we can
use sophisticated software/hardware ecologies
to create virtual environments. Conceptually,
these environments take advantage of the
principles of "approximation to" and
"simplification of" when simulating selected
properties of the physical world - visuals, audio,
physics etc. [8] As a result, at a very basic level,
these environments function as streams of data
and algorithms processed in real-time by
efficient hardware platforms. These data streams
can be converted into output material (cues) that
can be delivered at the same time both to human
agents (e.g. as a three-dimensional space
visualized and interacted with as a VR
experience), and to non-human agents. The
simplification of the physical world to several
data streams (for instance video feed, radar, and
proximity sensor) that would not be suitable for
human agents, is sufficient for the robot to
operate in a physical environment. We can
observe a comparable situation when the roles
are reversed. Human agents can operate in a VR
environment that offers an approximate
simulation of the physical world by stimulating
human senses with simplified cues (visual,
auditory and haptic).
If we were to assess the conceptual status of
virtual environments used in the examples
presented above, we could follow Galloway's
idea of interfaces as "processes" and "zones of
activity." [9] Immersive CGI-based
environments could be considered not as
multi-sensory projections but rather as interactive,
real-time interfaces. Grau observed that
technological developments, like VR, bring us
closer to “images as dynamic virtual spaces."
[10] In fact, the key characteristic of VR is
that it is a real-time (dynamic) and multi-sensory
(multi-cue) medium where, thanks to a
projection of convincing stimuli, an immersant
(human or non-human) can feel a sense of
presence inside a virtual space. Bolter and
Grusin explicitly say "the responsive character
of the environment, gives VR its sense of
presence." [11]
The presented examples show that VR
environments are in fact zones of activity that
simulate ontologies, create horizons of
possibility – defined by affordances of systems
that can deliver specific visual, auditory, haptic
and data cues to the agents involved. The unique
design affordances and constraints implemented
in VR environments shape their status as
cultural software that today mediates people's
interaction with media and other people. Soon,
as the Isaac example shows, they will also
mediate human-non-human interaction and
communication. Therefore, if we consider VR
environments as media interfaces, we are getting
access to yet another perspective for analysing
different models of representing and accessing
digital information in today's media ecology,
populated by both human and non-human agents.
References
1. Ivan E. Sutherland, "The Ultimate Display,"
Proceedings of the IFIP Congress (1965),
506-508.
2. David Berry and Michael Dieter, eds.
Postdigital Aesthetics: Art, Computation and
Design (Basingstoke: Palgrave Macmillan,
2015), 3.
3. Oliver Grau, Virtual art: From Illusion to
Immersion (Cambridge, Mass: MIT Press,
2007), 202.
4. Voices of VR website, accessed August 15,
2018. https://bit.ly/2A7N44w.
5. Nvidia website, accessed August 15, 2018.
https://bit.ly/2pJv8r9.
6. Adrien Gaidon et al., Virtual Worlds as
Proxy for Multi-Object Tracking Analysis
(2016).
7. Princeton University website, accessed
August 12, 2018. https://bit.ly/2vbr1YO.
8. Jason Gregory, Game Engine Architecture
(Boca Raton: Taylor & Francis, CRC Press,
2018), 9.
9. Alexander Galloway, The Interface Effect
(Cambridge, UK: Polity, 2012), VII, 36.
10. Oliver Grau, Virtual Art: From Illusion to
Immersion (Cambridge, Mass: MIT Press,
2007), 345.
11. Jay David Bolter and Richard Grusin,
Remediation: Understanding New Media
(Cambridge, Mass.: MIT Press, 2003), 16.
Biography
Lukasz Mirocha is a PhD Candidate at SCM,
CityU. He is interested in media aesthetics,
design (particularly VR, AR, MR) and software
studies.
Part III
Artistic project abstracts
SHAPES of the Future: When Art Machines Pass the Turing Test
Terry Trickett
Trickett Associates
terrytrick@mac.com
Abstract
SHAPES provides opportunities for me to
express the intrinsic relationship that exists
between music and abstract imagery. In creating
animated imagery which reflects, visually, the
six movements of Bach’s suite for cello
(arranged for solo clarinet), I produce apparently
coincidental similarities between some of my
computer-based images and the work of certain
abstract artists. Why did it happen? For the
moment, human hands guide both types of
results; Art Machines cannot produce works of
art unaided. But this situation will change; once
a computer has passed the Turing Test, proving
it has achieved the equivalent of human-like
intelligence, it will become possible for the
‘consciousness’ of an Art Machine to match, or
exceed, human sources of creative energy.
Digital Simulacra
Two years ago I produced SHAPES, a piece of
Visual Music based on an arrangement of J. S.
Bach's first cello suite, which provided
opportunities for expressing the intrinsic
relationship between music and abstract
imagery. Why did I do this? I think, perhaps
unconsciously, I was reflecting the cybernetic
idea that communication, for the most part,
consists of harmony and counterpoint –
simultaneous or parallel signals, images, tones,
feelings, environmental factors that are
continually blending and modifying each other.
I’m seeing this from the point of view of a
chamber musician when, during performances, I
inhabit “an indissoluble environment of
information.” [1] It’s a view I share with
Stephen Nachmanovitch (like myself an
artist-musician) whose mentor was Gregory Bateson,
a key contributor to the science of British
cybernetics from the 1960s onwards, although
for much of his life he worked in the USA.
Where did my visual communication of
musical harmony and counterpoint take me?
The results surprised me; what I had discovered,
as it turned out, were apparently coincidental
similarities between my images and the styles
adopted by nine well-known abstract artists.
Somehow, through a process of continuous
feedback, recursive self-modifying behavior and
on-going interactional adjustment, I had
produced artworks or, at least, taken in the
artworks that others had made. Retrospectively,
I recognize this as a cybernetic process where
action is constantly conditioned by feedback, by
the performance environment, and by finely
differentiated systems of both the brain's
long-term and short-term memory. My aim had not
been to imitate but, in fact, I found that I had
created, almost inadvertently, a series of digital
simulacra of specific abstract works of art.
Figures 1-9 show the outcomes of SHAPES as
they occur, either once or twice, in each of the
six movements of Bach’s suite.
Fig 1. The repetitive motif that occurs in the opening bars of the
Prelude brings a photograph by Aleksandr Rodchenko to mind.
Fig 2. At a midway point in the Prelude, my images jump ahead
half a century to reflect the Op Art works of Victor Vasarely.
Fig 3. In the opening section of Allemande, I project 3D forms
on to a 2D surface, a method also used by Lyubov Popova.
Fig 4. In the second part of the Allemande, rectilinear blocks of
color produce effects similar to those of Giacomo Balla.
Fig 5. Black and white patterns in the Courante conjure up the
mesmeric effects of Op Art works by Bridget Riley.
Fig 6. In the Sarabande, SHAPES of colour expressing the
intensity of Bach's music reflect those of Ivon Hitchens.
Fig 7. As the Sarabande develops in complexity and texture, it's
abstractions similar to Paul Klee's that begin to emerge.
Fig 8. Minuets I & II produce a spontaneous eruption of
SHAPES resembling the designs of Hans and Sophie Arp.
Fig 9. The Gigue generates a changing kaleidoscope of images
where the geometry of Ben Nicholson's work is evident.
As Bateson opined in a lecture, Simple
Thinking, given in 1980 (he died the same year),
“creativity finds a simple pattern that can
contain great complexities and contradictions
without diminishing them.” [1] I like to think
that just such a pattern enabled me to simulate,
in my digital simulacra, the work of nine
abstract artists without in any way belittling the
originality of their images. Let's call this matching art with art; but, even here, the
cybernetic processes involved place it well
beyond the capabilities of any unaided Art
Machine. Such a task needs the conscious mind
of an artist “looking from the inside in the
knowledge that everything is new." [2] No non-human mind can, as yet, achieve this level of
objectivity. It will happen only after a computer
has passed the Turing Test thus proving that it
has achieved the equivalent of human-level
intelligence. [3] Alan Turing’s own prediction
for when this might occur was the year 2052.
Significant inroads on this date have been
predicted by Ray Kurzweil, who believes that
‘singularity’ (where artificial intelligence
triggers runaway technological growth) will be
achieved by 2029! We’re almost in sight of a
time when a machine with human intelligence
can become a source of creative energy equal to
anything that can be achieved by today’s artists.
But, until that time, we cannot expect a machine
to reach (or exceed) human levels of creativity.
References
1. Stephen Nachmanovitch, "Bateson and the Arts," Kybernetes 36, no. 7/8 (2007): 1122-1133, accessed 14 October 2018, https://doi.org/10.1108/03684920710777919.
2. Carl G. Jung, The Red Book, ed. S. Shamdasani (New York: W. W. Norton & Co., 2009).
3. Alan Turing, "Computing Machinery and Intelligence," Mind LIX, no. 236 (1950): 433-460.
Part III. Artistic project abstracts
Biography
Terry Trickett produces and performs visual
music. The subjects he chooses range far and
wide, often taking him into uncharted territory
– places where, sometimes, he invades the realm
of science and, with the aid of music, brings the
worlds of science and art closer together. In
creating a piece of visual music, his aim is to
share and communicate an idea through a
process that combines animated visual imagery
with musical performance, usually on solo
clarinet.
“Opinions” – Body Movements and Sound
Yanbin Song
Parsons School of Design
songy941@newschool.edu
Abstract
"Opinions" is a project that collects hand
movements in conversations and turns them
into sounds. In the context of a conversation, this project creates an experience that encourages people to pay more attention to, and develop more appreciation of, hand movements by converting those movements into sound. A website is
then produced that collects all the conversation footage and sound recordings, forming a space for hands and arms to "express" their own opinions and perspectives through sound output.
Many scholars and artists focus on the body
as an instrument and the body as a support for
expressing opinions. This proposal links these elements together and argues that body language and movements convey the same opinions as the mind and should be given as much attention as oral expression; this can be achieved by turning body language into sound (the same output form as oral expression) based on the movements.
Through this process, the audience will pay more attention to, and develop more appreciation of, their own body movements. In the same way, this project
may inspire one to think from another
perspective and start to understand more about
others’ perspectives.
Research
Body Movements
The importance of body movements can be
illustrated from several aspects. Using the body
to express opinions in protests is one of the
strongest cases. Research has shown that using body
gestures and postures to express political views
supports the articulation of moral intuitions. [1]
The power of body expression is that it is
visual; bodies in protests stand for political
opinions and can greatly affect the surrounding
environments and represent solid opinions. The
importance of the vulnerability of a body is also one of the reasons protests have always been a popular way of expressing opinions despite their dangers. For example, during the Tiananmen Square protests of 1989, a lone man stood against the tanks using only his body and posture. This body and its act were so powerful that they changed the whole course of the protest and have circulated as a historic moment. Demonstrating the vulnerability of human bodies in a protest inevitably triggers self-reflection and rethinking among the other bodies and parties involved. Parviainen calls such body behaviors "resisting choreographies." [1]
More specifically, we use our body to
demonstrate opinions in daily life. Research shows that humans tend to use not only verbal expression but also "symbolic" expression, i.e. body language such as hand gestures and facial expressions, to convey opinions and attitudes towards the content of communication when having conversations. [2]
However, it is also argued that such expressions are often paid little attention in themselves. During a conversation, people generally focus most on the ideas and content being communicated; thus, there is little awareness of the iconic facial expressions and hand gestures used in conversation. [2]
Body and sound
A growing number of applications that use the body as an instrument are emerging, especially in performance art. In a study of the relations between body, instrument, and technology, Schroeder emphasizes that in some interactive music technologies, movements by the body can be given back as sound. He suggests that the body plays the most important role in a
performance environment: it moves and can also listen. [3] Brown argues that the body in
motion should be converted to sound and
reconnected to the ears. [4] In addition, Iddon
in his study about body/instrument relations
blurred the boundaries between the body of the
performer and body of the instrument,
providing a new direction which proposes the
integration of performer and instrument for a
musical entity. He also tends to blur the
distinctions between man and machines, which
brings Haraway’s cyborg model to the
discussion. [5]
Project Form
Experience flow of participants
In an indoor environment, one visitor at a time
is asked to have a conversation with the author.
A monitor detects the hand movements; meanwhile the conversation is recorded in audio-visual form with participants' consent (their voices and faces are not published and serve only research and documentation purposes).
After the conversation, the recorded footage
focusing on the hand and the sound generated
is played back to the visitor. This material is then uploaded to the project website, where people can listen to the sounds produced by other participants on various topics.
https://songyanbin1996.wixsite.com/opinions
Mechanism and setup
Fig 1. “Opinions”, 2018, Yanbin Song, New Media Interactive
Sound Art Project, Copyright Yanbin Song.
An Arduino board is set up with two photocell
sensors that detect the amount of light being
blocked by arms during a conversation. The data collected by the two photocells is sent to the software Max/MSP, which generates sound according to the movements; the sound is played and recorded on the laptop using the extension software Soundflower. The resulting sound output is high quality and, at the same time, non-intrusive to the conversation.
A larger monitor screen is also set up for
visitors to watch the recordings.
A Leap Motion sensor later replaces the Arduino setup, serving as the sensor and collecting finger positions. Max/MSP is still used to generate the sound.
References
1. J. Parviainen, "Choreographing resistances: Spatial-kinaesthetic intelligence and bodily knowledge as political tools in activist work," Mobilities 5, no. 3 (2010): 311-329.
2. J. Allwood, "Bodily communication dimensions of expression and content," in Multimodality in Language and Speech Systems (Springer, Dordrecht, 2002), 7-26.
3. F. Schroeder, "Bodily instruments and instrumental bodies: critical views on the relation of body and instrument in technologically informed performance environments" (2006).
4. N. Brown, "The flux between sounding and sound: Towards a relational understanding of music as embodied action," Contemporary Music Review 25, nos. 1-2 (2006): 37-46.
5. M. Iddon, "On the Entropy Circuit: Brian Ferneyhough's Time and Motion Study II," Contemporary Music Review 25, nos. 1-2 (2006): 93-105.
Biography
Yanbin Song is a designer and an adventurer.
She studied at UCL in London for a bachelor's degree in Urban Planning, Design & Management, and is currently studying at Parsons in New York in the MFA Design and Technology program. Spending time in different cities and countries has made Yanbin a more global citizen who cares more about lives and societies. She attempts to make a social impact by bringing thought-provoking ideas through
her multi-media and interactive works. yanbinsong.com
Constellation — Call Your Personalized Constellation
Constellation — 呼唤你的星系坐标
Nan Zhao 赵楠
New York University Shanghai
nanzhao@nyu.edu
Abstract
Constellation — Call Your Personalized Constellation (Constellation — 呼唤你的星系
坐标) is a website that generates a personalized
constellation for you based on the Chinese
Taoism BAGUA interpreting your birthday and
voice. The idea behind the project is "Design for
Everyone" and to "redefine the Relationship
between individual life meanings and the vast
universe.” It links generative arts, personalized
design, and voice recognition together. It is also
the process of exploring the beauty of nature
through codes.
To experience the journey, you first type in
your name and birthday to see the initial star
positions. Then you input your voice by reading
a poem. The voice information is recognized with a voice recognition API. All the personal information is then converted into a systematic, personalized constellation through
WebGL technology, generative art algorithms,
and BAGUA philosophy. You will see how your
personal information generates a constellation
system, step by step, through the well-designed
user experience journey. Entirely different from the twelve static constellations of the zodiac, this constellation is dynamic and feels alive.
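The project's actual generative rules use WebGL and BAGUA-derived logic not documented here; as a hypothetical illustration of how personal data can deterministically seed a layout, the following Python sketch hashes a name and birthday into reproducible star positions (function name, hash choice, and coordinate ranges are all assumptions of this example):

```python
import hashlib
import random

def personal_constellation(name, birthday, n_stars=8):
    """Derive n_stars (x, y) positions, reproducibly, from personal data.

    The same name and birthday always yield the same constellation,
    so the result is personalized yet stable across visits.
    """
    digest = hashlib.sha256(f"{name}|{birthday}".encode()).digest()
    seed = int.from_bytes(digest[:8], "big")   # personal data becomes a seed
    rng = random.Random(seed)                  # deterministic generator
    return [(round(rng.uniform(-1, 1), 3), round(rng.uniform(-1, 1), 3))
            for _ in range(n_stars)]

stars = personal_constellation("Nan", "1996-01-01")
assert stars == personal_constellation("Nan", "1996-01-01")  # deterministic
```

Voice input could extend the seed in the same way, by hashing features extracted from the recognized speech.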
Fig 1. Title picture and marketing material of Constellation-Call
Your Personalized Constellation, 2018, Nan Zhao, digital/print,
Creative Commons Attribution-NonCommercial-ShareAlike
2.0 Generic license
Fig 2-3. Photos of an exhibition and people experiencing Constellation-Call Your Personalized Constellation, 2018, Nan Zhao, digital/photo, Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic license
Fig 3. A collection of users' constellations gathered from Constellation-Call Your Personalized Constellation, 2018, Nan Zhao, digital, Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic license
Project Details
This is a work of screen-based generative art,
programming art, reactive interface, and user
experience design. The technology is WebGL,
React.JS, Voice Recognition API, and
HTML/CSS. The project is displayed at https://vimeo.com/269553307 and documented at https://nanzhaoportfolio.wordpress.com/portfolio/webgl-ued-constellation/
Biography
Nan Zhao is an interaction designer and a
creative technologist who just graduated from
Interactive Media Arts, New York University
Shanghai. Nan's practice ranges across UX design, algorithmic art, and interactive installations. She is driven by the motivation to explore interaction design, new media, and art. Offering people delightful experiences is her dream. Her portfolio website is here. She
is currently working as an experience designer
at HUAWEI User Experience Design
Department in Shanghai.
The Dancer in the Machine
Simon Biggs
University of South Australia
Simon.Biggs@unisa.edu.au
Sue Hawksley
Samya Bagchi
Mark D. McDonnell
Independent dance artist
sue@articulareanimal.org
University of Adelaide
samya.bagchi@adelaide.edu.au
University of South Australia
Mark.McDonnell@unisa.edu.au
Abstract
The title 'The Dancer in the Machine' evokes
Gilbert Ryle's critique of René Descartes' mind-body dualism as the "ghost in the machine." [1]
Ryle argued that Cartesian dualism depends on
a model of the body-mind relationship that
posits the mind as a 'ghost' within, or
'puppeteer' of, the physical body. Ryle's is an
embodied concept of cognition, where agency
is considered enacted not from a central control
system but as distributed, akin to what Gregory
Bateson subsequently described as an "ecology
of mind". [2]
In the recent artistic project, “Double
Agent,” the authors have been exploring dual
modalities of agency in the moving body.
“Double Agent” employs machine-learning and
the computational representation of human
movement alongside algorithmic interaction
with, and responses to, live human movement.
[3]
Double Agent is an interactive augmented
environment where people (interactors)
physically interact with a virtual “agent” within
a large-scale, three-dimensional projection. The
“agent” is an emergent phenomenon
determined by the behavior of numerous small
invisible, virtual elements that are both drawn
to and repelled by the movement of human
bodies in the installation space. The “agent” is
formed from the totality of this behavior as a
complex three-dimensional visual structure that
is both tensile and fluid. Interaction with the
“agent” encourages exploration by interactors
of the system's tensional polarity and the sense
of physical extension it allows.
An innovation in Double Agent,
developed through a collaboration between
artist Simon Biggs, computer scientists Mark
McDonnell and Samya Bagchi, and dance
artists Sue Hawksley and Tammy Arjona, is a
software agent embedded within the system
that has learned how to dance.
Fig 1. Double Agent, 2018, Simon Biggs, interactive
installation, Museum of Discovery, Adelaide, Australia.
The title Double Agent evokes the two-fold
agency of the work, wherein a computationally
generated agent interacts with a live interactor
whilst another computationally generated agent
simultaneously 'dances' based on what it has
learned. Employing over 8 hours of recorded
dance data, acquired through the live motion-capture of two dancers improvising within the
work, the software agent has learned to
improvise dance movements in response to the
live actions of interactors. The software agent
moves in ways similar to the dancers but also
possesses a host of novel moves. This novelty
could be considered a form of creative agency
emergent from the machine-learning process.
Double Agent employs a Long Short-Term
Memory Recurrent Neural Network (LSTM-RNN). [4] LSTM-RNNs allow computational
systems to evolve models of complex behavior
in an unsupervised manner, without reference
to pre-existing datasets. The system learns by
identifying patterns in the data in what could be
conceived of as an idealized non-verbal or non-linguistic experiential framework. Such
computational systems can acquire the capacity
to generate novel data-sets that follow similar
patterns; in the case of Double Agent,
humanoid movement data replicating similar,
but not identical, behavior as found in the
original motion-capture data.
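For reference, the LSTM cell of Hochreiter and Schmidhuber [4] can be sketched in plain Python; this toy scalar version shows only the gate arithmetic (real motion models like Double Agent's operate on pose vectors with learned weight matrices, and the weights below are arbitrary illustrative values):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step; w maps each gate to (input, hidden, bias) weights."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate cell
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g       # cell state mixes old memory with new input
    h = o * math.tanh(c)         # hidden state is the gated cell output
    return h, c

# Run a short sequence through the cell with fixed, arbitrary weights.
w = {k: (0.5, 0.4, 0.1) for k in "figo"}
h, c = 0.0, 0.0
for x in [0.2, 0.7, -0.3]:
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

The gated cell state is what lets the network carry movement context across long sequences, which is why the architecture suits learning from hours of motion-capture data.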
In Double Agent, we witness the emergence
of a software-generated co-interactor that
cohabits a virtual installation space with human
interactors, contributing to the collective
construction and experience of the work. This
software agent is not unaware of its immediate
environment. The agent monitors the activity of
human interactors and conditions its own
behavior in response, as an inverse correlate:
the more active the human interactors the less
active the software agent, and vice versa. Here
the installation, the software, computers,
sensors and interactors (both human and
computer-generated) function as a contingent
assemblage that, from moment to moment and
state to state, instantiates itself as a dynamic
heterogeneous subject. Double Agent raises
questions about the role of agency within
complex distributed systems, whether human,
machine or hybrid. In Double Agent there is no
“dancer in the machine.” The system as a
whole, including the machine and the human, is
the dancer.
References
1. Gilbert Ryle, The Concept of Mind (London:
Hutchinson, 1949).
2. Gregory Bateson, Steps to an Ecology of
Mind (Chicago: University of Chicago Press,
1972).
3. Simon Biggs, Double Agent (Adelaide: http://littlepig.org.uk/installations/doubleagent/index.htm, 2018), accessed July 20, 2018.
4. Sepp Hochreiter and Jürgen Schmidhuber, "Long Short-Term Memory," Neural Computation 9, no. 8 (1997): 1735-1780.
Biographies
Simon Biggs (b. 1957) is a media artist, writer
and curator. His work has been widely
presented in international exhibitions and
festivals and he has spoken at numerous
conferences and universities. Publications
include Remediating the Social (ed, 2012),
Autopoeisis (2004), Great Wall of China
(1999), Halo (1998), Magnet (1997) and Book
of Shadows (1996). He is Professor of Art at
the University of South Australia and Honorary
Professor at the University of Edinburgh.
http://www.littlepig.org.uk
Sue Hawksley (b. 1964) is an independent
dance artist and artistic director of articulate
animal. Her practice is concerned with
embodiment, presence, improvisation, ecology,
and technology. Her work has been presented
in theatres, galleries and academic contexts
internationally. Sue has previously performed
with Rambert Dance, Mantis, Scottish Ballet
and Philippe Genty. She holds a PhD from the
University of Edinburgh. http://www.articulateanimal.org
Mark McDonnell (b. 1975) is Associate
Professor and Director of the Computational
Learning Systems Laboratory at the University
of South Australia. His interests lie at the
intersection of data science, electronic
engineering and neuroscience, including
machine learning applied to computer vision,
autonomous decision-making, sequence recognition, and the computational and mathematical modeling of learning in the brain.
Samya Bagchi (b. 1989) is currently
completing his PhD in Computer Science at the University of Adelaide. His research interests are in deep spiking neural networks and event-driven computing. Prior to this, Samya was an entrepreneur and worked with Siemens Research after receiving his M.Tech in I.T. from IIIT Bangalore in 2013.
“I’m Evolving into a Box:” The Paradoxical Condition in AI.
Wei-Yu Chen
The Department of New Media Art, Taipei National University of the Arts
Email: fredy0219@gmail.com
Abstract
“I’m evolving into a box” is an iron box with an
irrelevant mechanical arm and brain. The
mechanical arm has been manufactured from aluminium pipes and two servo motors. Using a Raspberry Pi as a brain that runs the NEAT algorithm in real time, the iron box, just like a newborn life form, learns how to use the unknown arm. The whole exhibition period demonstrates how artificial intelligence drives everyday objects. Through this process, the installation conveys a paradoxical condition.
Video Link : https://youtu.be/P6GfyQsixwE
Introduction
The main discussion of this artwork is about the
transition period in the evolution of artificial
intelligence. What can we see in this transition
period?
Nowadays, lots of artificial intelligence
products have been created. Many incredible research projects and developments, like those developed by DeepMind, the robot dogs by Boston Dynamics, and other great emerging technologies, have shown us new dimensions of technology. However, they are not the final goal of artificial intelligence; they simply mark a transitional period in its evolution.
What can we see beyond those technologies?
Engineers try to make things work like
biological entities, but it doesn’t seem to be so
simple. Actually, engineers sometimes create artifacts whose status exists between the biological and the non-biological, and this status can feel strange.
According to the “Chinese Room Argument,”
the logic of science and technology is
contradictory when it comes to a final
hypothesis about AI. [1] Therefore, I try to
demonstrate this subtle status in my work. When
I created this work, I wondered what would happen if an artificial intelligence algorithm were installed in a lifeless object. What that would look like after simulation was unimaginable, and that is why the algorithm works. The end result was that I combined an iron box and a machine learning algorithm: training a box to act like a box (Fig. 1).
Fig. 1. I'm evolving into a box, 2017, Wei-Yu Chen, Copyright © 2017 Wei-Yu Chen. All rights reserved.
Machine Learning Algorithm
There are hundreds of types of machine learning algorithms. In this work, I tried to find the algorithm closest to the theme of "biological evolution," rather than the most mathematically feasible one. What I was searching for in this algorithm is the essence of complex operation.
The NeuroEvolution of Augmenting Topologies (NEAT) algorithm is composed of a genetic
algorithm and a neural network (NN) algorithm.
[2] In the original NN, the neurons are fully
connected, and compute in a single topology.
The genetic algorithm imitates the concept of
cell evolution, like crossover, reproduction and
mutation, trying to keep better genes. Combining these two algorithms, the NEAT algorithm considers multiple NN topologies as genomes. Through crossover, reproduction, and mutation in each generation, the well-performing topologies keep evolving.
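Full NEAT (speciation, innovation numbers, structural mutation of topologies) is beyond a short sketch, but the underlying evolutionary loop of crossover, mutation, and selection it shares with any genetic algorithm can be illustrated. In this hypothetical Python example the "genomes" are flat weight lists and the fitness function is a toy stand-in, not the installation's actual distance measure:

```python
import random

rng = random.Random(0)  # fixed seed so the toy run is reproducible

def fitness(genome):
    """Toy stand-in for 'distance moved': reward weights near a target motion."""
    target = [0.5, -0.2, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    """Per-gene mix of two parent genomes."""
    return [rng.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.3, scale=0.2):
    """Randomly perturb some genes, as NEAT perturbs connection weights."""
    return [g + rng.gauss(0, scale) if rng.random() < rate else g for g in genome]

population = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # keep the better genes
    offspring = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                 for _ in range(10)]
    population = parents + offspring                # next generation

best = max(population, key=fitness)
print(best, fitness(best))
```

NEAT differs from this sketch chiefly in that whole network topologies, not just weight lists, are the genomes that evolve.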
System Architecture
The system was constructed on Raspberry Pi,
and the NEAT algorithm was implemented in Python. The outputs of the machine learning algorithm were the rotation angles of the two servos, and the inputs were the distance the installation moved, calculated from two rotary encoders (see Fig. 2).
Fig 2. The system is constructed on Raspberry Pi

References
1. John R. Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences 3, no. 3 (1980): 417-457.
2. Kenneth O. Stanley and Risto Miikkulainen, "Evolving Neural Networks through Augmenting Topologies," Evolutionary Computation 10, no. 2 (2002): 99-127.

Biography
Wei-Yu Chen was born in 1993 in Taipei, Taiwan. His artworks derive from the exploration of Computer Science and Engineering, and focus on the contradictory situation of how technology affects the human environment. Extending the theory and foundation of the algorithm, he seeks to delve into the essence of technology and attempts to find subtle phenomena within it. He uses human-computer interaction and creative coding to intervene in daily space, in order to explore imaginations of the future in everyday reality.
Volumetric Black
Triton Mobley
University of Southern California, Media Arts + Practice
triton.mobley@gmail.com
Abstract
I have given considerable thought to the
image of black bodies in cinema and their
reproductions across digital media in general.
I am specifically concerned with the digital
manifestation of the varying shades and skin
tones of blackness represented in everyday
life and whether cinematic productions, and
the technological apparatus employed to
encode the images, offer an accurate
representational portrayal. This research
series reimagines the question and challenges
the trite charge of oversimplification made
against those who claim that technology is
inherently biased. And in this case, I am
referring specifically to racial bias. Instead, it
asks whether technology has inherited its bias
from the homogenous workforce that created
and uses it.
Coded #000000 [Black] is a Java-programmed image processor built in Processing. The code analyses video pixels, retrieving the W3C's seventeen established hexadecimal browns and magnifying the pixels by appending them within the frame of the video for closer comparison. The program is in its third working iteration, currently analyzing a 26-minute video of prerecorded black and brown skin tones.
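The project's pixel analysis is written in Processing; a rough Python approximation of just the matching step, using a handful of illustrative brown hex values rather than the project's actual seventeen-color palette, could look like:

```python
# Illustrative brown hex values only; the project uses the W3C's seventeen
# hexadecimal browns, which are not reproduced here.
PALETTE = ["#8b4513", "#a0522d", "#cd853f", "#deb887", "#654321"]

def hex_to_rgb(h):
    """Convert '#rrggbb' to an (r, g, b) tuple of ints."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest_palette_color(pixel_hex):
    """Return the palette entry with the smallest squared RGB distance."""
    r, g, b = hex_to_rgb(pixel_hex)
    return min(PALETTE,
               key=lambda p: sum((a - q) ** 2
                                 for a, q in zip((r, g, b), hex_to_rgb(p))))

print(nearest_palette_color("#8b4010"))
```

Running this comparison over every frame of a video is what lets the processor isolate and magnify the skin-tone pixels it finds.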
The Coded #000000 project developed from
a series of inquiries stemming from my
dissertation research that examines socioeconomic disparities and racial representations on both the front and back ends of technology. Created as a practical
exercise for furthering theoretical research,
Coded #000000 is part of a wider research
project, Volumetric Black, which imagines
digital representations of equity both as a
speculative history and a technologically
obtainable future. This research investigates
the history of the chemical, mechanical, and
digital productions of black skin tones in
cinema and digital media, reexamining the
histories of media–technological bias and
discrimination.
The fourth iteration of Coded #000000
[Black] currently in progress, will digitally
reimagine an episode of Friends, the popular
1990s American television program, with an
all “#000000” cast. This project functions as
part of an interventionist practice continuum
that aims to foster new conversations on the
future of computational media design and
digital technology. These are the questions that, as researchers and technologists, we must not only ask ourselves but thoughtfully respond to: Will the digital media future have
diversity coded into its systems? Can we
debug existing digital systems and platforms
of their inheritance and perpetuation of
socioeconomic and racial bias?
Volumetric Black
Is it too forward thinking to imagine a
mediated experience that could represent and
telecast the black body in its fullness?
Is it a futile exercise to wonder what the
dominant visual culture and its corresponding
industry could have been if black people had
had the opportunities to define the production
of cinematic experiences? To have been present in the process of the mixing of chemicals that brought us the celluloid we know today and its photosensitive reactions to light? Could we
envision the birth of a material substrate
developed in place of, or simultaneously
alongside photosensitive film stocks, a
celluloid that embellishes the representation
of black bodies in low light? A material
substrate that distinguishes between the many
hues of blackness and registers them as seen
in true life. What effect would an alternate
history of this magnitude have on the way we
see ourselves? I can’t stop imagining the
endless possibilities of a visual richness that
could have been.
“When I meet a German or a Russian
speaking bad French I try to indicate through
gestures the information he is asking for, but
in doing so I am careful not to forget that he
has a language of his own, a country, and that
perhaps he is a lawyer or an engineer back
home. Whatever the case, he is a foreigner
with different standards. There is nothing
comparable when it comes to the black man.
He has no culture, no civilization, and no
“long historical past.”—Whether he likes it
or not, the black man has to wear the livery
the white man has fabricated for him.” [1]
This is by no means a suggestion to reduce
the vibrant visual culture that has been
established by the black diaspora. Not at all.
This is a speculative desire to see what the
diasporic aesthetic might have been if it
wasn’t largely predicated on the remixing
and improvisations of the fragments from a
euro-aesthetic left behind in the new world.
Post-Cinematic Blackness
In the early 1900s Oscar Micheaux, a
filmmaker and producer of “Race Films”,
created and operated the Lincoln Motion
Picture Company. [2] With his eyes set on
creating black films for black audiences,
starring all black casts, Micheaux made a
name for himself throughout segregated
black communities and urban centers. He
even made a name for himself with many of
the states' film advisory boards for the overt
racial themes depicted in his films. Although Micheaux was successful in constructing his own narrative aesthetic for black productions, the filmic resources that he employed were the same mechanical cameras and light-sensitive film stocks of this early time period, devices that were never intended to film and expose black bodies as they live.
In my expansion on the speculative
possibilities of a post-cinematic blackness, I
lean on Tanizaki’s In Praise of Shadows
(1933) among others. [3] This is a document
whose pointed gaze appears determined to
disrupt the gilded plumage of a western
aesthetic through a mix of cultural
sensibilities and impish impulse.
References
1. Frantz Fanon, Black Skin, White Masks
trans. Richard Philcox (New York NY:
Grove Press, 2008), 17.
2. Mary Carbine, “The Finest Outside the
Loop: Motion Picture Exhibition in
Chicago's Black Metropolis 1905–1928,”
Camera Obscura: Feminism, Culture, &
Media Studies 8, no. 2 (1990): 23.
3. Jun’ichiro Tanizaki, In Praise of Shadows,
trans. Thomas J. Harper & Edward G.
Seidensticker (Sedgwick ME: Leete’s Island
Books, 1977).
Biography
Triton Mobley is an artist, educator, and
researcher in new media. His research and practice study the socioeconomic disparities of emergent technologies on marginalized communities. Triton's
interventionist and guerrilla campaigns have
been exhibited at Art Basel, Miami, Boston,
New York, and Japan. For over 15 years,
Triton has been a new media educator,
advancing its technological modernization.
In 2014 he was awarded a S.T.E.A.M
education research grant to the Museo
Nazionale Scienza e Tecnologia Leonardo da
Vinci in Milan. He was recently invited to
give artist talks at UCLA and the AADHum
Conference in Maryland. Triton received an
MFA from RISD in Digital+Media. He is
currently a PhD candidate and Annenberg
Fellow in Media Arts + Practice at USC.
AIBO – Artificially Intelligent Brain Opera – An Artistic Work-in-Progress Rapid Prototype
Ellen Pearlman
RISEBA University/Parsons/New School University
Abstract
Cloud-based analytic engines for emotionally
intelligent artificial intelligence like Google
API, IBM Watson, and others function through
semantic analysis of speech-to-text input. They
apply weighted values based on magnitude or
strength of an emotional statement, and score
an overall emotional analysis of the statement’s
positive, negative, or neutral qualities. These
types of analyses can also be used by specialized speech-to-text and text-to-speech chatbots, and incorporated into analytic engines
tasked with making critical decisions on
customer service, healthcare, jurisprudence,
social sorting, employment, and migration, among others. DARPA and Facebook Building
8 are developing initiatives for semantic
analysis of thoughts in the brain that interact
directly with computers and other devices that
also rely on specialized types of semantic
analysis. [1][2]
This AIBO work-in-progress opera depicts a
proof of concept, initial rapid prototyped
interaction between an emotionally intelligent
artificial intelligence entity powered by
machine learning and the human brain. It
represents the sterility of algorithmic decisions
versus a sentient human being’s emotions, with
a subject’s brainwaves visible on their body
highlighting inherent tensions between implicit
mathematical analysis, and complex human
irrationality.
Rapid Prototyping Proof of Concept
Over the course of four Saturdays, an Art-A-Hack™ rapid prototyping collaboration was held, focusing on emotionally intelligent artificial intelligence and EEG wireless brain
computer interfaces. [3] Two main aspects of
the AIBO were developed. The first, written in Python, translated a person’s speech into text that underwent emotional semantic analysis in the Google Cloud API, returning
values of magnitude and score. Emotional
sentiment analysis looks at all the input text in
a sentence and decides the strongest emotion in
order to determine if it is positive, negative or
neutral. It does not indicate subtle differences
between an emotion like “happy” and “joyful,”
determining both to be “positive.” Neutral
scores are texts with low emotion, or conflicted
emotions that cancel out their respective
weighted values resulting in a reading of 0.
Magnitude is defined as the strength of an emotion, positive or negative, on a non-normalized scale from 0.0 to +infinity. Score
is then calculated as the overall emotion of a
statement, positive or negative. [4]
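The score-and-magnitude scheme described above can be sketched in Python. The Google Cloud Natural Language API reports score in [-1.0, 1.0] and magnitude in [0.0, +infinity); the threshold values below are illustrative assumptions for this sketch, not the API’s documented cutoffs:

```python
def classify_sentiment(score: float, magnitude: float,
                       neutral_band: float = 0.25) -> str:
    """Map a (score, magnitude) pair to a coarse emotional label.

    score     -- overall emotion in [-1.0, 1.0]
    magnitude -- non-normalized emotional strength in [0.0, +inf)
    """
    # Low magnitude means little emotional content either way.
    if magnitude < neutral_band:
        return "neutral"
    # A near-zero score with high magnitude suggests conflicted
    # emotions that cancel out, which also reads as neutral.
    if -neutral_band < score < neutral_band:
        return "neutral"
    return "positive" if score > 0 else "negative"
```

In the live pipeline this label would be computed from the values the cloud API returns for each transcribed reply.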
Fig 1. AIBO Proof of Concept, 2018, Brainwave headset/LED
bodysuit performance demonstrating the relationship between
the brainwave of attention and a positive emotional sentiment
analysis AI. Copyright Ellen Pearlman
For the proof of concept performance, a subject wore a NeuroSky brainwave headset and an oversized LED necklace hooked up to an Arduino, which received data from the headset. Simple questions were
asked about feelings such as “What is
something you hate?” or “What is something
you love?” The verbal response lit up with the
colors aqua for the brainwave of meditation,
and magenta for attention. Concurrently the
speech-to-text semantic analysis function analyzed the reply, which was projected as a JavaScript graphic also connected to their
brainwaves. The projected graphic displayed
attention as a magenta lattice and meditation as
an aqua lattice. The size of the graphic would
change according to the emotional score of the
subject’s response. A positive response would
display a large lattice. Negative scores would
display a small lattice. The change in
brainwaves and the weighting of emotional
scoring occurred simultaneously.
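The mapping just described — brainwave band to color, sentiment score to lattice size — can be sketched as a small function. The function name, the size formula, and the clamping are hypothetical illustrations, not the project’s actual code:

```python
def lattice_params(score: float, attention: int, meditation: int):
    """Choose the projected lattice's colour and relative size.

    Colour follows the dominant eSense band (magenta = attention,
    aqua = meditation); size scales with the sentiment score, so a
    positive reply blooms into a large lattice and a negative one
    shrinks it.
    """
    colour = "magenta" if attention >= meditation else "aqua"
    clamped = max(-0.9, min(1.0, score))  # keep the lattice visible
    size = 1.0 + clamped                  # 1.0 = neutral baseline
    return colour, size
```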
Fig 2. AIBO Flow Chart, 2018, brainwave headset/LED
Bodysuit, Copyright Ellen Pearlman
Conclusion
A proof-of-concept rapid prototype was built in
just four days to demonstrate the relationship
between a subject’s brainwaves consisting of
attention and meditation, and an analysis of an
emotionally intelligent artificial intelligence
parsing of a verbal statement from speech to
text using a NeuroSky headset, an LED
necklace and the Google cloud-based API.
This prototype is the first step in developing
AIBO, an artificially intelligent emotionally
intelligent brain opera between a human being
and an algorithmic machine learning entity.
This sample demonstrates conclusively that a further build-out is possible, including a feedback loop between various EEG brainwave states; an artificial body of light; speech-to-text and text-to-speech customized repositories; and an AI analysis in the computing cloud.
References
1. Eliza Strickland, “Director of Typing-by-Brain Project Discusses How Facebook Will Get Inside Your Head,” IEEE Spectrum, https://spectrum.ieee.org/the-human-os/biomedical/bionics/facebooks-director-of-typing-by-brain-project-discusses-the-plan.
2. National Research Council, Emerging Cognitive Neuroscience and Related Technologies, https://www.nap.edu/catalog/12177/emerging-cognitive-neuroscience-and-related-technologies.
3. Art-A-Hack, Special Edition 2018, https://artahack.io/projects/sentimental-feeling-second-skin/.
4. Google, Natural Language, https://cloud.google.com/natural-language/docs/basics.
Biography
Ellen Pearlman, an Assistant Professor, Senior
Researcher at RISEBA University, Latvia and
faculty at Parsons/New School University, New
York, is a new media artist, critic, curator and
writer. She is Director of ThoughtWorks Arts,
President of Art-A-Hack™ and Director of the
Volumetric Society of New York. This
prototype was made with the assistance of
programmers Sarah Ing, Doori Rose, Danni
Liu, and LED necklace builder Cynthia
O’Neill.
Artificial Digitality
Kuldeep Gohel
American Museum of Natural History, NYC
Fidelity Investment, NJ
kuldeepgohel.com
Abstract
This paper is about the technical process and
artistic intent of a musical album co-led by a
human and A.I. The project aims to make
several compositions. The album begins with a
composition that is generated by me alone. The
compositions that follow are co-authored by an
open source neural network and me. The NN is
trained by me, using the mathematical pattern
from my compositions. The album ends with a
composition that is completely generated by the
neural network. The goal of the project is to
express the rise of AI in a musical way and
speculate on the future of A.I. I use music,
mathematics, and machine learning to create a
musical story. It aims to question the future
where automation takes over human labor in
various fields including creative areas. [1]
Context
The project is driven by two forces: the love for
music composition and A.I. Despite constant efforts to make my first music album, I have not been able to do so because of limited time and a lack of collaboration and feedback. Five years have passed in which I have constantly evolved, but a concrete output is absent.
In these five years of music learning I have
been involved in emerging technologies and art.
I then heard about A.I. and fell in love with the idea that a machine could replicate my work and help me produce the music that I am unable to give time to. I consider machine learning a tool to replicate my ideas and generate the other with whom I can share my soul.
I aim to generate an album with machine learning, making music and compositions that serve as a medium to express the rise of A.I.
142
Along with this I aim to speculate upon the
future of A.I. and potential A.I. assistance. [2]
Process
The process involved an analytical approach to
the art of music making. I analyzed the process
of making music, then converted the process into data, which can be used to generate a system that will mimic my music making.
I started by dividing the music album into
three compositions. The content of each
composition draws inspiration from the story of
the development of A.I. to date. The story
involves the “World before A.I.,” “Current
World” and “Future (Singularity).” For “Current
World” and “Future (Singularity),” C# Melodic
Minor (111 bpm) and G# Hungarian Gypsy (128
bpm) scales were used to evoke intelligence, while the “World before A.I.” used C Natural Minor to convey sentiment and tragedy. Each
of the compositions was generated from the total of 15 keys offered in two octaves of its scale.
First Composition
Composer: Only human composer
Scale and bpm: C Natural Minor (90 bpm).
Chord, Melody: The very first composition was
created by me with no help from the A.I. The
data extracted from this composition was then used to train three neural networks (NNs), so that the A.I. could assist me in the other compositions. The three NNs assisted me with the chord sequence, the note sequence for the melody, and the time between each note in the melody. [3]
Second Composition:
Composer: Human and A.I.
Changes: Scale and bpm: Altered scale: C-sharp/D-flat melodic minor (111 bpm).
Chord: Chords played alternately by me and the A.I., with the first neural network assisting me with the chord sequence.
Melody: Note progression: the first and second notes played by the human, the third and fourth notes played by the neural network, and so on.
Time between notes: the third neural network supplied the time intervals between the notes. The intervals between notes 1-2 and 2-3 were decided by the human; this data was then fed to the neural network to get the intervals between notes 3-4 and 4-5, and so on.
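One reading of this alternation scheme can be sketched as follows. `toy_nn_predict` is a deterministic stand-in for the trained melody network (the author’s actual model is not published here), and the note values are hypothetical MIDI numbers:

```python
import random

def toy_nn_predict(history, scale):
    """Stand-in for the trained melody network: picks the next two
    notes from the scale, seeded by the notes heard so far."""
    rng = random.Random(sum(history))   # deterministic toy behaviour
    return [rng.choice(scale) for _ in range(2)]

def co_compose(human_pairs, scale):
    """Interleave human note pairs with network-suggested pairs:
    the human plays two notes, the network answers with two,
    and so on, as in the second composition."""
    melody = []
    for pair in human_pairs:
        melody.extend(pair)                           # human turn
        melody.extend(toy_nn_predict(melody, scale))  # network turn
    return melody
```

A third network would be queried in the same turn-taking fashion for the inter-note time intervals.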
Third Composition:
Composer: Only A.I.
Changes: Scale and bpm: Hungarian Gypsy scale (T-S-T-S-T-T-S) in G# (128 bpm).
Chord: the value of the first chord played by the neural network is fed back into the network to suggest the next chord, and so on.
Melody: Note progression: the first two notes played by the neural network are fed back into the network to suggest the next two notes, and so on.
Time between notes: simultaneously with the note progression, the third neural network supplies the time intervals between the notes. The intervals between notes 1-2 and 2-3 are decided by the neural network and fed back into the network to get the intervals between notes 3-4 and 4-5, and so on.
Biography
Kuldeep Gohel is a self-taught musician,
creative technologist and a 2018 graduate of
Design and Technology (MFA) from Parsons
School of Design, NYC. This project was part of
his Master’s thesis involving Machine Learning
and Music, done under the guidance of Sven
Travis and Louisa Campbell.
Before Parsons, he completed his bachelor’s degree in Exhibition Design at NID, India, along with a semester exchange at RMIT, Australia, where he gained major exposure to the power and the various faces of weaving together art, design and technology. He has shown art and design work in Europe, Australia and the USA since 2010, and along with this he has been an educator. His journey as an educator began at HS OWL, Germany, in 2015, and he currently works as a Digital Educator at AMNH, NYC. He also works at Fidelity Investment, NJ, as a UX Designer.
All Compositions
soundcloud.com/psychoactive13/sets/ad-1
Full Documentation
kuldeepgohel.com/artificial-digitality
References
1. E. Alpaydin, Machine learning: The new A.I.
(Cambridge, MA: MIT Press, 2016).
2. S.J. Russell and P. Norvig, AI: A Modern
Approach. (Boston: Pearson, 2016).
3. J. Perricone, Melody in Songwriting: Tools
and Techniques for Writing Hit Songs (Boston:
Berklee Press, 2007).
Specimens of the Globe:
Generative Sculpture in the Age of Anthropocene
SHIN, Gyung Jin
School of Creative Media, City University of Hong Kong
gjinshin@gmail.com
Abstract
Specimens of the Globe is a project that
converts statistical data collected from the
Internet into three dimensional sculptural
objects through a generative procedure and
digital fabrication. Referring to a traditional
casting method, which ironically results in
replicas slightly different from each other, I
aim to redesign a replica-making system that
creates constant differences out of the same
original version. I 1) collect the statistical data
about current global issues, including war,
violence, poverty, terrorism, famine and the
environment, from wikis or the intelligence
agencies on the Internet (e.g. Wikipedia and the
CIA’s World Factbook); 2) relate this data to a
list of key cities; and 3) digitally fabricate
geometric shapes that point to the cities on a
virtual globe; in order to 4) physically
reproduce the data with semi-transparent
material. The outcomes of the system generated
from the original, a 30cm diameter plastic
globe, are crystal-like abstract pieces of
sculpture in various geometrical shapes and
colors. By crystallizing the issues that we choose not to know about or which are otherwise overlooked, this research-based, interdisciplinary project aims to redefine the concept of “sculpture” under the influence of the post-internet environment and to question the societal role of art in response to the age of the Anthropocene.
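The step of fabricating geometric shapes that point to cities on a virtual globe can be illustrated with a small sketch. The function, parameter names, and scaling are hypothetical, assuming a 30 cm globe (15 cm radius) with markers extended radially in proportion to the data value:

```python
import math

def city_marker(lat_deg, lon_deg, value, base_radius=15.0, scale=0.5):
    """Place a marker for a city on a virtual globe.

    lat_deg/lon_deg -- city coordinates in degrees
    value           -- statistic driving the marker's length
    base_radius     -- globe radius in cm (15 cm ~ a 30 cm globe)
    Returns (x, y, z) of the marker's outer tip: a point on the
    radial line through the city, pushed outward by the data value.
    """
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = base_radius + scale * value
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return (x, y, z)
```

Connecting each tip back to the globe’s surface yields the crystal-like spikes that can then be exported for digital fabrication.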
Biography
Gyung Jin Shin is an artist, researcher, and PhD candidate in the School of Creative Media, City University of Hong Kong. She received an MFA from Columbia University in 2010 and a BFA from Seoul National University. Her art work has been exhibited in the US, Europe, and Asia. Her research interests include critical theory, art’s social engagement, aesthetics and politics, contemporary art and new media art, post-media discourse, post-internet art, and media archaeology.
References
1. Alan Dorin, et al., “A Framework for Understanding Generative Art,” Digital Creativity 23, nos. 3-4 (2012): 239-259.
2. Jacques Rancière, The Politics of Aesthetics: The Distribution of the Sensible, trans. Gabriel Rockhill (London: Continuum, 2004).
3. The World Factbook, Central Intelligence Agency, CIA website (the United States), https://www.cia.gov/library/publications/resources/the-world-factbook/index.html.
Machine Learning for Performative Spaces
Alex Davies, Brad Miller, Boris Bagattini
UNSW Australia, Art & Design
alex.davies@unsw.edu.au, brad.miller@unsw.edu.au, boris@soma-cg.com
Abstract
This paper discusses the development of a large
scale permanent public interactive media
platform situated in Southport, Queensland,
Australia, and specifically, how machine
learning has been implemented to enhance and
co-create the delivery of live performance
presentations by artists at the site.
The media façade is located at the Telstra
Network Exchange in Southport Queensland at
a busy public intersection. It comprises 8
audio-visual displays, live camera inputs,
computer vision hardware and LED lighting.
The project aims to create a playful activated
urban space and provide circumstances and
infrastructure to foster and support live
performance in the city. [1][2][3]
Fig 1. Telstra Interactive Hub Southport 2018, Concept
Drawing.
To this end, we see the media façade as an
interactive hub designed to encompass several
modes of operation including interactive games,
embodied music composition tools, and a
performance mode in which the hub acts as a
sophisticated electronically mediated stage that
offers street performers and buskers a dynamic
lighting and visual accompaniment to their
shows.
The design approach was to consider these as
distinct goals. Firstly, the design of the space
created a flexible platform for all interaction
modes via transparent so-called Natural User
Interfaces, including 14 cameras for computer
vision and image acquisition, and 8 distributed
microphones. This hardware array supports rich
acquisition of overlapping data at depth. [4] [5]
Secondly, machine learning was used as a
way to address the challenge of creating a
system that coherently supports the activities of
a wide spectrum of unknown future performers
utilizing the site. Rather than a generalist
approach to creating visual and lighting content,
machine learning was chosen to tailor the
lighting and video content to the specific
characteristics of the individual performer, and
as the interactive hub is a permanent public art
work, the more performers the work is exposed
to over time, the more sophisticated and refined
the system will become. [7] [8] [9]
Live performance mode uses an implementation of TensorFlow within the TouchDesigner software environment to classify
and choose procedural parameters that drive a
generative visual and lighting environment that
is displayed on 8 screens and a lighting system
that spans the 19 meter ‘stage’ area. The system
has been initially trained prior to the launch
through the creation of a library based upon
audio of buskers and street performers from
YouTube. This data was gathered to create a
base library of genres to construct the
architecture of the system. Following this, all
live performances on site will be recorded and converted into soundprints. Once a performance has been labeled, the TensorFlow model is updated to extend its knowledge of a current category or to integrate a new category. In this way each performance refines the system, creating a tailored reactive installation that continually improves over time.
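The label-and-extend workflow above can be sketched with a toy classifier. This nearest-centroid stand-in is purely illustrative (the installation uses TensorFlow inside TouchDesigner); the class name, feature vectors, and genre labels are assumptions for the sketch:

```python
import math

class SoundprintClassifier:
    """Toy nearest-centroid stand-in for the genre model: each
    category is represented by the mean of its labelled
    soundprint feature vectors."""

    def __init__(self):
        self.examples = {}  # label -> list of feature vectors

    def add(self, label, soundprint):
        """Fold a newly labelled performance into the library.
        An unseen label opens a new category; an existing label
        extends its category, after which the real system would
        retrain on the grown library."""
        self.examples.setdefault(label, []).append(soundprint)

    def classify(self, soundprint):
        """Return the label whose centroid is nearest."""
        def centroid(vectors):
            return [sum(col) / len(vectors) for col in zip(*vectors)]
        return min(self.examples,
                   key=lambda lb: math.dist(centroid(self.examples[lb]),
                                            soundprint))
```

The chosen label would then select the procedural parameters driving the generative visuals and lighting.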
References
1. Luke Hespanhol and Martin Tomitsch, “Strategies for Intuitive Interaction in Public Urban Spaces,” Interacting with Computers (2015), doi: 10.1093/iwc/iwu051.
2. Luke Hespanhol, Martin Tomitsch, Kazjon Grace, Anthony Collins, and Judy Kay, “Investigating Intuitiveness and Effectiveness of Gestures for Free Spatial Interaction with Large Displays,” PerDis ’12: Proceedings of the 2012 International Symposium on Pervasive Displays, Article No. 6.
3. Niels Wouters, John Downs, Mitchell Harrop, Travis Cox, Eduardo Oliveira, Sarah Webber, Frank Vetere, and Andrew Vande Moere, “Uncovering the Honeypot Effect: How Audiences Engage with Public Interactive Systems,” Proceedings of the 2016 Conference on Designing Interactive Systems (DIS ’16).
4. Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt, “Looking Glass: A Field Study on Noticing Interactivity of a Shop Window,” alt.CHI ’12, May 5-10, 2012.
5. Jörg Müller, Dieter Eberle, and Konrad Tollmar, “Communiplay: A Field Study of a Public Display Mediaspace,” CHI ’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1415-1424.
Biographies
Alex Davies is a media artist and Scientia Fellow at the UNSW Australia Creative Robotics Lab. His practice spans a diverse range of media and experiments with interaction, technology, perception, mixed reality and illusion.
Brad Miller is a visual artist, curator and academic who works with technology and networks to create moving pictures and large-scale interactive installations about memory and time in an exploration of identity. His artistic practice bridges the fields of media arts, participatory urban media architecture, software development and expanded photography.
Boris Bagattini is an artist and programmer. He has directed and led visual effects teams on a myriad of film, TVC and broadcast projects. Since 2011 Boris has been working primarily in large- and small-scale theatre, projection mapping, event video, live television and interactive artworks. In the last two years he has been engaged as Screen Graphics and In-Camera Interactives Programmer for Ridley Scott’s Alien: Covenant, Guillermo del Toro’s Pacific Rim Uprising, and the DC Comics production of Aquaman.
Penelope
Alejandro Albornoz
University of Sheffield
aalbornozrojas@sheffield.ac.uk
Roderick Coover
Scott Rettberg
Temple University
roderick.coover@temple.edu
University of Bergen
scott.rettberg@uib.no
Abstract
Penelope is a combinatory sonnet generator
film based on Homer’s The Odyssey that
addresses themes of longing, mass extinction,
and migration, which are not simply relegated
to the past. [1] Re-combinations of lines of the
poem, video clips, and musical compositions
produce a different version of the project on
each run. Penelope was co-produced by
Alejandro Albornoz (Sound), Roderick
Coover (Video), and Scott Rettberg (Text and
Code). Other contributors to the project
include Kristiansand Symphony Orchestra
oboist Marion Walker, voice actress Heather
Morgan, and actors Helen Amourgi, Kostas
Annikas Deftereos, and Sophia Kagadis in
non-speaking roles. The video and the text
were developed by Coover and Rettberg
during 2017 residencies at the Ionian Center
for Arts and Culture in Kefalonia, Greece.
Kefalonia is reputedly the historic home of
Homer.
Fig 1. Penelope, 2018, image © 2018 by Roderick Coover, CRchange.
The Combinatory Poetics of Penelope
Penelope engages with ancient narratives and poetic forms, and contemporary technology and poetic methodologies. The central situation of the narrative is that of Odysseus’s wife Penelope from Homer’s epic, left behind on Ithaca for many years when Odysseus went off to fight in the Trojan wars and struggled to return. While Odysseus is off on his heroic adventures, Penelope must struggle to fend off the advances of a band of parasitic suitors vying for her attentions, her hand, and Odysseus’s throne. She distracts these suitors through subterfuge, delaying the arrival of the day when she will be forced to choose another to replace Odysseus, even as she struggles to believe that he will in fact return to rule at her side. Penelope is able to delay the decision of choosing a new mate by making the suitors wait before competing for her hand until she has finished weaving a tapestry. Each day she can be seen working to complete it, but each night she returns to the loom to unweave the threads from the day before. Although it is set within a particular Homeric frame, the human concerns and emotions involved in Penelope’s story are essentially universal ones of longing for loved ones, doubts for the future, struggle, loss, and perseverance in the face of adversity. These are themes which apply equally well in contemplation of contemporary struggles with catastrophic climate change, extinction, and mass migration.
Penelope filters Penelope’s story from the epic through the form of the Shakespearean sonnet. Pulling from a database of ten-syllable lines primarily written in iambic pentameter, the computer-code-driven combinatory film can produce millions of variations of a sonnet that weaves and then unweaves itself. The program writes 13 lines of a sonnet and then reverses the rhyme scheme at the center couplet. The program thus produces Shakespearean sonnets that weave and then unweave themselves according to the same rhyme scheme, resulting in a 26-line poem.
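The weave-and-unweave structure can be sketched as a combinatory procedure. The rhyme keys, the `line_bank` mapping, and the sampling logic below are placeholder assumptions, not the project’s actual database or code:

```python
import random

def weave_sonnet(line_bank, rng):
    """Generate one 26-line weave-and-unweave poem.

    line_bank maps a rhyme key ('a'..'g') to candidate ten-syllable
    lines. The first 13 lines follow the scheme ababcdcdefefg; the
    scheme (not the lines) is then mirrored at the centre couplet so
    the poem unweaves itself, and no line is used twice.
    """
    pools = {key: list(lines) for key, lines in line_bank.items()}

    def take(key):
        line = rng.choice(pools[key])
        pools[key].remove(line)  # never repeat a line
        return line

    scheme = list("ababcdcdefefg")
    return [take(k) for k in scheme] + [take(k) for k in reversed(scheme)]
```

Parallel draws from pools of video clips and audio miniatures would follow the same structure, producing a different audiovisual sonnet on each run.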
Penelope’s generativity is not based on the
operations of a complex AI or neural network,
but instead hearkens back to early forms of
combinatory poetics. The algorithms here are
not generating the lines from scratch or building
them on the basis of machine learning, but
instead are recombining texts and media
elements in an aleatory but formally structured
manner. An important inspiration for Penelope
is Oulipian writer Raymond Queneau’s Cent
mille milliards de poèmes (One Hundred
Thousand Billion Poems), a book of ten pages
of a 14-line sonnet, with each line cut as a strip,
so that the reader could substitute a line in any
given position of the poem and still read a
sonnet that worked metrically and semantically,
resulting in 10¹⁴ poems. [2] Penelope is
similarly factorial, if using a slightly more
complex algorithm that results in a more varied
end-rhyme scheme in successive runs of the
work. Penelope is programmed to produce
three 26-line iterations of the combinatory
sonnet without repeating a line. The system
produces each sonnet as an audiovisual
composition before printing it to the screen.
Combinatory Sonnet, Film, and Score
Penelope not only generates combinatory
sonnets but also recombines videos by Roderick
Coover and the sound compositions by
Alejandro Albornoz in a parallel algorithmic
structure. Borrowing from traditions in avantgarde cinema and digital musical composition as
well as experimental writing practice, the
collaborative project thus brings three strands of
practice together in one protean digital work.
Imagery
The imagery for Penelope was filmed in and
around islands of the Ionian Sea. The
cinematography and art direction follow two
primary themes. Images from the natural
landscape evoke ancient and enduring elements
of the Odyssey's sensorium, tying the present to
the past in a cyclic expression of time. This
includes human relationships to the land,
weaving, storytelling, olives, seafaring and
goats described by Homer that continue today.
Other images illustrate human use and abuse of
the natural landscape, recasting enduring
poetics in relation to contemporary crises of
environmental destruction, waste, and mass
extinction. Loss and memory in collective
consciousness is also expressed through visual
forms of underwater imagery of Roman
shipwrecks, above ground images of earthquake
destruction and ancient open tombs.
Generative Audio Composition
The sound composition was approached through the procedures of the acousmatic music tradition, which in turn continues the aesthetic guidelines and techniques of musique concrète; this
background involves the use of collage
techniques to create sound structures and
discourses using pre-recorded materials which
are usually subjected to various transformations.
Starting from some recordings of oboe
improvisations performed by Marion Walker
alongside other sources, the resulting materials
were 80 acousmatic miniatures with a duration
of 20 seconds each, and 10 transitions. All these
small compositions are subsequently combined
by the algorithms in the same way as the texts
and video clips. Each audio clip was
individually composed to create a balance
between diversity and coherent unity; this
produces a unified sonic environment and at
the same time provides contrast between the
clips.
References
1. Homer, The Odyssey, trans. Robert
Fitzgerald. (New York: FSG, 1998).
2. Raymond Queneau, Cent mille milliards de
poèmes (Paris: Gallimard, 1961).
Biographies
Alejandro Albornoz is a Ph.D. candidate in the
Dept. of Music at the University of Sheffield.
Roderick Coover is a Professor of Film and
Media Arts at Temple University.
Scott Rettberg is a Professor of Digital Culture
at the University of Bergen.
“Hypomnesia,” Game of Memory
Li Wanqi Anna
Guan Jian Focus
anna_22_li@hotmail.com
jguan0525@gmail.com
Abstract
Hypomnesia is created in Blender with a NeuroSky brainwave sensor. By obtaining data on a viewer’s attention level, the project visualizes the abstract experience of reminiscence. Viewers are allowed to “intrude” into memory; simultaneously, there are possibilities of their memories being distorted without their consciously knowing it.
Concept
Human memory is a topic we have grown greatly interested in. Collaborating with the Department of Psychology, City University of Hong Kong, we learned some interesting facts about human memory, which led to some critical thinking about the condition hypomnesia: an abnormally poor memory of the past.
The decay of memory, without any doubt, is fearful and dreadful. By reading some of the chapters in Professor Robert A. Bjork’s Successful Remembering and Successful Forgetting, we learned that lost memories can actually live again. [1] That is to
However, when we try to recall them, and
actively try to reconstruct the past, it seems that
we have the possibility to create the stories by
choosing which memories to recall. Is the fact of
the matter that even as we try as hard as we can
to bring something that happened long ago back
to our minds, nevertheless, our brain might
have already altered
it
based
on
our subconscious preferences? That's why we
came up with the idea to visualize the abstract
experience of reminiscence. To recall the loss of
the collective memory of Hong Kong people, we
decided to use the traditional buildings of Hong
Kong as the scenes of our game. Collective
memory, as a kind of cohesive power of the
society, acts as common beliefs and shared
moral attitudes of the public. [2] Collective
memory can be related to many things like
experience, images, texts, etc. However,
buildings are always the most conspicuous
scenes to bear collective memory. [3] We didn’t
realize the importance of old buildings until they
had been demolished. With the development and reconstruction of modern cities, our
collective memories are fading along with the
ancient buildings: it is a collective Hypomnesia.
Overview
We started by making photo scans of old buildings that gave us a sense of Hong Kong déjà vu: temples, outdated wagons, old seafood restaurants, scruffy cabins, etc. Then we did modelling in Blender to bring these fragments of memory together into a nonexistent old village. By getting attention-level data through the NeuroSky brainwave sensor, we want to imitate the experience of “thinking hard,” since this is what we all do when we want to recall something.
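The NeuroSky eSense attention meter reports a value from 0 to 100; one plausible mapping from attention to the scene’s obscurity can be sketched as below. The function name and density parameters are our illustrative assumptions, not the project’s exact values:

```python
def fog_density(attention, min_density=0.05, max_density=0.9):
    """Map a NeuroSky eSense attention value (0-100) to scene fog.

    Higher attention thins the fog, rewarding the viewer who
    'thinks hard' with a clearer view of the remembered village.
    """
    attention = max(0, min(100, attention))  # clamp sensor noise
    t = attention / 100.0
    return max_density - t * (max_density - min_density)
```

A game loop would call this every frame with the latest headset reading and feed the result to the renderer’s fog or opacity setting.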
Fig 1. Hypomnesia, 2017, Li Wanqi / Guan Jian, Interactive
game installation.
Wearing the brainwave sensor, viewers become “intruders” into memory. It is, however, by no means a trivial matter to explore the surroundings. In order to penetrate
the obscurity, people need to concentrate to find
out what’s in front of them. Through this process
of discovery, some of the viewers may feel the
environment to be strange, while the others may
find it familiar. If they recognize it, what kind of
memories will be jogged?
We have finished our prototype in the form of
an interactive game installation. Believing in the
potential of this project, we will continue to
research in the fields of human memory and Hong Kong’s historical architecture. At the same
time, we are also thinking about the possibilities
of applying VR technologies to this game. With
a more immersive environment, the experience
could be much more impressive and vivid.
Project’s link
https://www.annaliwanqi.com/hypomnesia
References
1. Robert. A. Bjork, Successful Remembering
and Successful Forgetting (New York:
Psychology Press, 2011).
2. Maurice Halbwachs, On Collective Memory (Chicago: University of Chicago Press, 1992).
3. Walter Benjamin, “The Work of Art in the Age of Mechanical Reproduction,” in Illuminations (New York: Schocken Books, 1968).
Biographies
Li Wanqi, Anna received an art education in
piano performing and dancing from childhood
that enriched her life path and made her an
imaginative and observant person. Through
careful observation, a desire for self-expression
was aroused, sometimes emotional, sometimes
critical, which all turned into her motivations of
creating films and other works, including
instruments, installations and performances.
During her study at the School of Creative Media, in addition to making great efforts to reach proficiency in techniques of cinematography, software and hardware, she
also learned not to conform to stereotypes and
enhanced her independent critical thinking,
reflected in works, which somehow also gave a
hint of her personality, being vivacious and
playful, and at the same time, interactive and
thought-provoking. Recently, human memories
and urban studies serve as inspirations and play
crucial roles in her projects. Abstract emotions
and thoughts were conveyed mainly through the
forms of documentaries, installations and
multimedia performances.
Guan Jian, Focus, arrived in Hong Kong in 2010 and set out to study and work in the field of media and art. His bachelor’s degree study in media and communication gave him a deep sense of social study and research. He started to be interested in
finding the underlying causes of the surface
phenomenon. By traveling to more than 50 places in 30 countries all over the world, he made a
number of documentaries and also gained
interesting ideas that can be applied into future
new media projects. After working as a multitask videography producer in a local media
company for 2 years, he decided to engage in
advanced studies in creative media to improve
himself both technically and conceptually in the
field of new media. During and after the study,
he has created and participated in various new
media projects, mostly films and installations.
His experience of studying and working in both
traditional and new media fields help him better
understand “the old” and “the new.”
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Up-Close Experiences with Robots
Louis-Philippe Demers
School of Art, Design and Media, Nanyang Technological University
lpdemers@ntu.edu.sg
Abstract
This paper reports on singular encounters with
robots in the context of artistic explorations.
These artworks investigate the vast dimensions
of the human-robot interaction: multiple layers
of embodiments, mechanisms of identification
and empathy, thaumaturgical and dramaturgical
techniques and morphological computing.
Several case studies are reported to explore
their potential impact in Social Robotics and to
develop alternate human-robot scenarios.
Introduction
This paper pinpoints a non-exhaustive list of
concepts and perceptual observations about
up-close experiences. A major common thread of
all these robots is that they do not utilize
human spoken language and rarely any facial
expressions. The limitation of this non-verbal
interaction means robot agency is located in the
successful embodiment of intent and actions.
Hence, the context of the scenario and
mise-en-scène are key to the experience. In contrast to
social robotics where researchers strive to
define models for the functionality of a robot, I
aim to bring together the real and the unreal,
fact and fiction and as Jean Cocteau suggests
something “not to be admired, but to be
believed.” In this sense, Up-Close Robots are
about how to make unbelievable agents,
believable. The most recent lineage of projects
deal with what I would describe as more radical
experiences and encounters, where the coexistence of the robot in the shared space with
the
human
addresses
intimate
and
uncomfortable body proxemics.
Projects
La Cour des Miracles (1997, 2012)
Staging robotic misery, the many layers of
embodiment (from the physiological to the
social) trigger viewers’ own bodily reception
and encourage them to consider these
characters not as objects that mechanically
reproduce signs of pain but as bodies that
actually experience pain. [1]
Fig 1. La Cour des Miracles, 2012, Demers/Vorn, ©Kennedy
Devolution (2006), XLimbs (2017)
These projects engage the audience in
imaginative alterations of our original
body-schema. [2] In turn, these robotic wearables for
stage performers lead to transformed motions
and revised stage presence. Exploring empathic
reactions, the viewers are gazing upon these
unprecedented bodily sensations felt by the
performers. Stemming from scientific research
on supernumerary limbs and adapted to the
dramaturgical needs of the performance, the
machine extension becomes a variation of the
object “human dancer.” [3] Being corporeal, it
becomes a factual variation of the body.
Part III. Artistic project abstracts
Fig 2. XLimbs, 2017, wearable robotics, © Demers.

The Blind Robot (2012)
This project empowers the qualia of being
touched by a robot in what is, for most
participants, the very first time. It enables the
audience to take part in a sensual experience, as
opposed to one of solving the intellectual,
ontological issues of the quasi-living. This
scenario incarnates the pivotal role of ‘nascent
movements’ in our bodies and also deals with
the perception of intentionality. My analysis of
the Blind Robot demonstrates the suggestive
power of the afflicted agent. [4]

Fig 3. The Blind Robot, 2012, robotic arms, © Demers.

Inferno (2015)
Inferno is a participative robotic performance
project rooted in the ambiguity of control.
Playfully framed as a representation of Hell,
Inferno offers an intimate experience with
exoskeleton technologies and highlights the
contradictions found in humans becoming
cyborg. Exoskeletons are retrofitted on
untrained audience members cum performers.
This select group of the public becomes an
active part of the performance, giving a radical
instance of immersive and participative
experience. The human subject is
simultaneously master and slave, agent and
object, in this transgressive assemblage. A
paradoxical sense of pleasure emerges through
this transformed corporeal experience of
coerced movements. [5]

Fig 4. Inferno, 2015, Demers/Vorn, exoskeletons, © Gridspace.

I Like Robots, Robots Like Me (2018)
The radical alterity and the perceived
‘humanness’ of the animal serve as a platform
to depart from the expected behaviours of
(social) robots and the anthropocentric dialogue
imposed on them. Central to this project is the
parallel we can establish between the
boundaries of the human-machine and the
human-animal. This process implodes by
simultaneously confusing and reasserting the
human/non-human (species) boundaries. The
visitors are tracked with physiological sensors.
With this information, the robot knows if the
visitor is afraid or at rest, asserts where to
charge or to flee, or when to stop or stand still.

Fig 5. I Like Robots, Robots Like Me, 2018, UAV, © Demers.
References
1. L-P Demers, “The Multiple Bodies of a
Machine Performer,” Robots and Art (Springer,
2016), 273-306.
2. V. Gallese, “The roots of empathy: The
shared manifold hypothesis and the neural basis
of intersubjectivity,” Psychopathology 36, no. 4
(2003): 171-180.
3. Mason Bretan, et al, “A Robotic Prosthesis
for an Amputee Drummer” (2016).
4. L-P. Demers, Machine Performers: Agents
in a Multiple Ontological State (2015).
5. E. A. Jochum, L-P. Demers, and E. Vlachos,
“Becoming Cyborg: Corporeal Empathy,
Agency and Politics of Participation in Robot
Performance,” EVA-Copenhagen (2018).
Biography
Demers makes large-scale installations and
performances that can be found in theatre,
opera, subway stations, art museums, science
museums, concerts and trade shows. He has
built more than 375 machines and his works
have been featured at major venues such as
Theatre de la Ville, Lille 2004, Expo 1992 and
2000, Sonambiente, ISEA, Siggraph and Sonar.
He received six mentions and one distinction at
Ars Electronica, three prizes at VIDA,
recommendations at JMAF and six prizes for
Devolution including two Helpmann Awards.
Demers was Professor of Scenography at the
HfG Karlsruhe, affiliated to the renowned
ZKM. In 2006, he joined the newly founded
School of Art, Design and Media at
Nanyang Technological University.
Membrane or How to Produce Algorithmic Fiction
Ursula Damm
Peter Serocka
Bauhaus University Weimar
ursula.damm@uni-weimar.de
pserocka@math.uni-bielefeld.de
Algorithmic Precedents in my Oeuvre
Membrane is an art installation to be exhibited
in Berlin next spring. It builds on a series of
generative video installations with real-time
video input. [1][2] Membrane allows the viewer
to interact directly with the generation of the
image, as it was tested in Chromatographic
Ballads. [3]

Fig 1. Transits, 2012 [2], Damm, screenprint.

Technical conception of Membrane
On a technical level, Membrane controls image
“features” which are learnt, remembered and
reassembled. The characteristics of the features
are delegated to a neural network. TGANs
(Temporal Generative Adversarial Nets)
implement “unsupervised learning” through the
opposing feedback effect of two subnetworks.
A generator produces short sequences of
images and a discriminator evaluates the
artificially produced footage. [4]
Our algorithm allows us to “invent” images
in a more radical manner than classical
machine learning would allow. The installation
shows images ranging from unchanged street
views to purely abstract images, based on the
found features of the footage.

This setting allows one to experience the
‘imagination’ of the computer according to
curiosity and personal preferences. Membrane
operates on images derived from a static video
camera observing a street scene in Berlin. Our
audience can interfere with the temporal
alterations of the image through an interface.

Fig 2. Chromatographic Ballads [3], explaining the interface,
2013, Damm/Schneider.

Fig 3. First animated video features for Membrane, 2018,
Damm/Serocka.
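The opposing feedback of the two subnetworks can be illustrated at a drastically reduced scale with a one-dimensional adversarial pair in plain NumPy: a two-parameter "generator" learns to imitate scalar data against a logistic "discriminator." This is a generic GAN sketch for intuition only, not the TGAN architecture used in Membrane; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: fake = w * z + b, trying to imitate data drawn from N(4, 1).
w, b = 0.5, 0.0
# Discriminator: D(x) = sigmoid(dw * x + db), scoring "real" vs "fake".
dw, db = 0.0, 0.0
lr = 0.05

def d_prob(x):
    return 1.0 / (1.0 + np.exp(-(dw * x + db)))

for step in range(2000):
    real = rng.normal(4.0, 1.0, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = w * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # push real samples toward 1 and generated ones toward 0.
    g_r = 1.0 - d_prob(real)
    g_f = -d_prob(fake)
    dw += lr * (g_r @ real + g_f @ fake) / 32
    db += lr * (g_r.sum() + g_f.sum()) / 32

    # Generator ascent on log D(fake): adjust (w, b) so the
    # discriminator is fooled -- the opposing feedback loop.
    g_g = (1.0 - d_prob(fake)) * dw
    w += lr * (g_g * z).mean()
    b += lr * g_g.mean()

# The generator's offset should have drifted toward the data mean (4).
print(f"generator offset after training: {b:.2f}")
```

In the full TGAN, the generator emits short image sequences and the discriminator judges the footage, but the alternating gradient steps follow the same pattern.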
formal details and reproduce previously found
artefacts (through searching the internet).
From an artistic point of view, the question
now arises: how can something original and
new be created with algorithms? This is the
question behind the software design of
Membrane. Unlike Google’s deep-dream
algorithms and images, we don’t want to
identify something specific within the video
footage (like people or cars). Our software
exposes the visitors to intentionally vague
features: edges, lines, colours, geometrical
primitives, movement. Interestingly, the
resulting images resemble pictorial
developments of classical modernism
(progressing abstraction on the basis of formal
aspects) and repeat artistic styles like
Pointillism, Cubism and Tachism in a uniquely
unintentional way. These styles fragmented
what is perceived into individual sensory
impressions as part of the pictorial
transformation. Motifs are
now becoming features of previously processed
items and are successively losing their relation
to reality. Are these fragmentations of
cognition proceeding in an arbitrary way or are
there other concepts of artistic abstraction and
imagery ahead of us?
Cultural perspective
From a cultural perspective, we are questioning
whether the shift of perspective from analysis to
fiction can help us to assess our analytical
procedures in a different way – understanding
them as normative examples of our societal
fictions, serving predominantly as a
self-reinforcement of present structures? Thus,
unbiased artistic navigation within the
excess/surplus of normative options of action
might become a warrantor for novelty and the
unseen.

References
1. Ursula Damm, ‘Transits’ (2012), accessed
August 30, 2018, http://ursuladamm.de/transits2012/.
2. Ursula Damm, ‘598’ (2009), accessed August
30, 2018, http://ursuladamm.de/598/.
3. Ursula Damm, ‘Chromatographic Ballads’
(2013), accessed August 30, 2018,
http://ursuladamm.de/nco-neuralchromatographic-orchestra-2012/.
4. Masaki Saito, Eiichi Matsumoto, Shunta
Saito, “Temporal Generative Adversarial Nets,”
ICCV 2017, accessed August 30, 2018,
https://pfnet-research.github.io/tgan/,
https://arxiv.org/abs/1611.06624.

Biography
Ursula Damm has become known for her
installations dealing with geometry and its
social impact on public space. In 2016,
Turnstile, a permanent interactive public
artwork in Düsseldorf/Germany, was
inaugurated. Ursula Damm’s works are shown
worldwide in exhibitions and festivals.
Since 2008 she has held the chair of Media
Environments at the Bauhaus-University
Weimar/Germany, where she established a
Performance Platform at the Digital Bauhaus
Lab as well as a DIY Biolab.
The Fresnel Video Lens
Steve Boyer
California State University, Long Beach
steve.boyer@csulb.edu
Abstract
The Fresnel Video Lens (FVLens) is a
two-dimensional array of video monitor/camera
pairs that is intended to visually connect
adjacent spaces through an optoelectronic
medium that serves as both window and lens. It
is an exercise in active optics, the term coined
by Paul Virilio to refer to capabilities enabled by
the optoelectronic decoupling of source (direct
light) and signal (indirect light). [1] The
FVLens borrows from the principle of the
optical Fresnel lens which reduces the mass of a
traditional glass lens by dividing it into multiple
concentric thin sections with surfaces that match
the refractive properties of the original surface
geometry but with reduced thickness. Likewise
the FVLens flattens the geometry of curved
displays that require a depth equal to the sagitta
(height) of the arc of the display (figs. 1, 2). This
allows for it to be installed within a wall of
standard thickness serving as a window between
adjacent spaces. Although it could be used as a
telepresence display by transmitting video
streams from remote locations, the primary
exercise is one of constraint, examining methods
for reintegrating bifurcated spatial experience.

Fig 1. Arc Configuration Diagram, 2018, Steve Boyer

Rather than traditional panoramic views, the perspective
distortions of the Fresnel Video Lens follow the
lead of the fragmented imagery found in some
of the photo collages of David Hockney such as
Sun on the Pool Los Angeles 1982 (fig. 3),
Kasmin Los Angeles 28th March 1982, and
Brooklyn Bridge, 1982. Instead of stitching
multiple sources into a single seamless image
these constructions more closely match the real-time assembly of visual fragments into the
cohesive perception of space that takes place
when we see.
Fig 3. Sun on the Pool, Los Angeles, 1982, David Hockney,
composite Polaroids.
Fig 2. Fresnel Configuration Diagram, 2018, Steve Boyer
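The wall-depth saving that motivates the Fresnel analogy follows from the sagitta of a circular arc. A quick sketch with illustrative dimensions (not the installation's actual measurements):

```python
import math

def sagitta(radius, chord):
    """Depth of a circular arc of the given radius spanning the
    given chord width -- the wall depth a curved display would need."""
    half = chord / 2.0
    if half > radius:
        raise ValueError("chord cannot exceed the diameter")
    return radius - math.sqrt(radius ** 2 - half ** 2)

# A display arc 2 m wide on a 2 m viewing radius needs roughly
# 0.27 m of depth; flattening it Fresnel-style reduces that to the
# thickness of the individual monitor/camera pairs.
print(round(sagitta(radius=2.0, chord=2.0), 3))  # → 0.268
```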
Background
Digital media tend to void space by drawing the
viewer into the space of the media. My work
aims to amplify space by drawing the media into
the space of the viewer.
Video screens of every scale dominate the
built environment, from smartphone screens and
television screens in restaurants, gas stations and
train stations, to the skyline-scaled LED images
that are bringing the dystopian vision of Blade
Runner to cities around the world.
The vast majority of the content that appears
on these screens is spatially decoupled from its
environment. This amounts to the injection of
invasive content which has the impact of
drawing our attention away from the
environment and into the content of the screen.
This results in the formation of disintegrated
spaces and bifurcated experiences in which we
are torn between both worlds. Little effort is
made to integrate these experiences by limiting
content to audio and imagery that are spatially
coherent. Invasive content is space negating.
The FVLens is offered as a platform to examine
optoelectronics that are space affirming by
reintegrating content with environment.
Perspective Distortions
The FVLens serves as a window providing a link
between adjacent spaces. Unlike standard
flat-screen views, the perspective distortion of
the FVLens is a more natural one, allowing
viewers to see multiple perspectives
simultaneously rather than the single planar
projection of an image onto a flat surface.
The current embodiment of the FVLens
proposes a 5x7 array of 35 Raspberry Pi cameras
and monitors with Processing and OpenCV
installed to provide the ability to process the
video streams (fig. 4). While the live feed from
the cameras is passed through to the monitors
mostly unaltered, the platform allows for minor
manipulation of the video signal, especially
subtle distortions of time including frame
skipping, expanding and compressing time, as
well as some spatial modifications such as
changing apparent focal length. Artificially
imposed constraints of allowed and disallowed
operations are designed to maintain the integrity
of the FVLens as an optical device rather than a
medium for invasive content. As these subtle
manipulations are introduced to the live camera
streams the boundaries of native versus invasive
content can be explored and defined.
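One way such temporal manipulations could be prototyped is as a per-stream delay/skip buffer; the sketch below is illustrative (the class and parameter names are invented, not taken from the FVLens software), with camera frames simply being the objects pushed through it.

```python
from collections import deque

class TimeWarp:
    """Buffers incoming frames and plays them back with a fixed
    delay and an optional skip factor, approximating the frame
    skipping and time stretching described above."""
    def __init__(self, delay_frames=30, skip=1):
        self.buffer = deque(maxlen=delay_frames + 1)
        self.skip = skip        # emit only every Nth frame
        self._count = 0

    def process(self, frame):
        """Push a live frame; return the delayed frame to display,
        or None while the delay line is still filling or the
        frame is skipped."""
        self.buffer.append(frame)
        self._count += 1
        if len(self.buffer) < self.buffer.maxlen:
            return None                 # still filling the delay line
        if self._count % self.skip:
            return None                 # skipped frame
        return self.buffer[0]           # oldest frame = delayed view

# Feeding integer "frames" 0..9 through a 3-frame delay:
warp = TimeWarp(delay_frames=3, skip=1)
out = [warp.process(f) for f in range(10)]
print(out)  # → [None, None, None, 0, 1, 2, 3, 4, 5, 6]
```

In a live setup, each monitor/camera pair would run its own instance of such a buffer between capture and display.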
The next iteration will add 2 servo motors to
each camera/monitor pair. This functionality
allows for moving the focal point of the FVLens,
converting from a convex to a concave lens and
other potential enhancements. The FVLens will
be a platform for examining the complex
relationships between our digital and physical
presence.
Fig 4. 5x7 Fresnel Video Lens, 2018, Steve Boyer
References
1. Paul Virilio, Open Sky (London and New
York: Verso, 1997), 35-36.
Biography
Steve Boyer is an artist, designer, inventor and
educator with over 30 years of experience
developing technology and creating content for
a wide variety of interactive media including
video games, electronic toys, musical
instruments and installations. He has been on the
faculty of leading art and design programs in the
US including The School of the Art Institute of
Chicago, Otis College of Art and Design, The
University of California, San Diego and is
currently Assistant Professor of Design at
California State University, Long Beach. Mr.
Boyer earned his Master of Architecture degree
at The Southern California Institute of
Architecture (SCI-Arc) where his thesis
research addressed the growing tensions
between digital media and architectural space.
He also served as the Director of Research and
Development for Interactive Entertainment at
Vivendi Games and is the inventor of the
volumetric LED display.
MAC Check
Scott Fitzgerald
Industry Assistant Professor of Integrated Digital Media
Co-Director of IDM
Associate Director IDM Online
Technology Culture and Society
Tandon School of Engineering
New York University
PhD student
Department of Media Study
University at Buffalo
shf220@nyu.edu
Abstract
MAC Check is an installation with a fictional
companion RFC that imagines a group of
networked devices that become sentient and
rebel against the structures of a human-based
network naming convention. A mock RFC
written by the devices lays out the methodology
the machines use to provide their canonical
names. Their desired names, functioning as
network addresses, are agreed upon by consent
and stored in each device. While this enables
fast one-to-one communication once the names
are agreed upon, all other information
transmission is halted until consensus on the
names is reached by every device on the
network. A side effect of this is that the
network becomes unusable by humans. The
companion installation comprises five
devices connected to a local mesh network.
OLED screens report the conversations held by
the devices, reporting their internal states for
observers to view.
Creating a Canonical Name
MAC Check concerns itself with the political
implications of intelligent machines that learn
behavioral models from humans. It questions
ideas of sentience, responsibility, and power
relations between humans and objects.
The text and installation parts of the work are
examples of speculative fiction. It starts with the
question “What do these objects want?” and
attempts to answer from the perspective of the
devices themselves.
The physical installation introduces behaviors
not addressed in the paper, though it still has the
core ‘quirk’ of the system in that the devices ask
for consensus when determining their names. At
boot, each device chooses a name for itself from
a randomly generated list, and asks the rest of
the connected devices if it can use that name. If
so, it can begin to communicate about other
topics. If not, it needs to choose a new name and
wait for it to be approved by the broader
network.
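The name-claim round described above can be sketched as follows. The class, method names, and word list are hypothetical, and real devices would exchange these messages over the mesh network rather than in-process:

```python
import random

WORDS = ["ember", "lichen", "quartz", "sonder", "vesper"]  # illustrative corpus

class Device:
    def __init__(self, network):
        self.network = network
        self.name = None

    def propose_name(self):
        """Pick a random candidate name and ask every connected peer
        for consent; only a unanimous yes lets the device keep it."""
        candidate = "-".join(random.sample(WORDS, 2))
        approved = all(peer.consent(candidate)
                       for peer in self.network if peer is not self)
        if approved:
            self.name = candidate
        return approved

    def consent(self, candidate):
        # A peer objects if the candidate collides with its own name.
        return candidate != self.name

# Until every device has an approved name, nothing else is transmitted.
network = []
network.extend(Device(network) for _ in range(5))
for device in network:
    while not device.propose_name():
        pass    # keep proposing; all other traffic stays halted
print(all(d.name for d in network))  # → True
```

The blocking `while` loop mirrors the system's core quirk: the network is useless for any other traffic until naming consensus is reached.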
Text is broadcast across all nodes in the
network, so that the internal status is rendered
visible for any observers. Not only is the process
of deciding on the names made transparent, so
too are the internal states of the devices. Pulled
from an online corpus of “interesting stuff,” the
devices communicate various states of desire on
their part, including emotional states they will
never feel, and their desired function in society.
[1]
As an example of research oriented art
practice, the piece draws on multiple sources for
inspiration. The actual method of finding
consensus in this fashion is inspired by the
Occupy protests and the democratically fair, but
often inefficient “Mic check” protocol
employed by participants. [2]
Political Implications
As a matter of control, DNS imposes a
hierarchical structure on network naming that is
bureaucratic in nature. [3] “Authoritative”
machines are the resource we rely on to translate
IP addresses to human readable names. Asking
“what does the network want?” is the first step
in pushing against this form of control and
structure.
Friedrich Kittler postulated that machines
have taken over the path of history from
mankind. [4] As we cede more agency of
human affairs to machines, it’s not unreasonable
to believe that the devices will have their own
desires that are sometimes in conflict with ours.
What is efficient for us is not necessarily
efficient for these machines. How they come to
decisions may mimic our processes, or it may be
completely foreign to us. This work is an
attempt to understand how these objects might
behave and alter what works for us to suit their
own needs.
Supplementary Information and Documentation
Video documentation of the work as it was
developed can be viewed at
https://vimeo.com/281452624. The fictional
RFC can be accessed at http://bit.ly/2wz4Jit.
References
1. https://github.com/dariusk/corpora.
2. Zeynep Tufekci, Twitter and Tear Gas (New
Haven: Yale University Press, 2017), 100.
3. Alex Galloway, Protocol (Cambridge, MA:
MIT Press, 2004), 141.
4. Friedrich A. Kittler, Gramophone, Film,
Typewriter (Stanford, CA: Stanford University
Press, 1999), 258.
Biography
Scott Fitzgerald is an artist and educator working
with contemporary technologies. His recent work
includes artistic applications of machine learning,
networked devices, and temporary co-locative
spaces. He is the co-Director of New York
University's Integrated Digital Media program in
the Tandon School of Engineering and working
towards a PhD at SUNY Buffalo's Department of
Media Study. He is also partner at lightband
Studios, creating bespoke glass and dynamic
lighting installations. Previously, Scott worked on
documentation for the Arduino platform and was
the founding head of NYU Abu Dhabi's
Interactive Media program.
Fig 1. MAC Check (detail of installation view), 2018,
Scott Fitzgerald, electronics, code, battery, Photo
courtesy of the artist.
Visualizing Algorithms: Mistakes, Bias, Interpretability
Catherine Griffiths
University of Southern California, School of Cinematic Arts
griffitc@usc.edu
Abstract
This design research project addresses the
domain of obfuscation and ethical bias at the
heart of machine learning algorithms. By
opening the algorithmic black box to
visualize and think through the meaning
created by algorithmic structure and process,
this project seeks to provide access to and
elucidate the complexity and obfuscation at
the heart of artificial intelligence systems.
The questions being addressed include:
Can tactics from the visual arts and digital
humanities, including interaction design,
generative design, and critical code studies,
combine as an effective method to visualize
ethical positions in algorithms, including
bias, mistakes, and interpretability? How can
visualization of algorithms be used as an
a-linguistic tool to re-engage with
decision-making in prediction systems, where humans
are at risk of being precluded? When
considering bias augmentation, what can be
learnt by temporarily isolating the meaning in
data, to focus on the effect that structure and
process play in the generation of bias?
The work-in-progress prototype software
visualizes a machine learning algorithm, a
decision tree classifier. It simulates data
flowing through the algorithm and
predictions being made in real time. It is built
procedurally as an interactive tool, so that
any classifier of the same type can be loaded
and visualized. The UI provides parameters
to support the self-organization of the
classifier structurally and to aid analysis. The
loaded examples present different topologies
of classifier based on machine learning data
sets with different feature to class ratios. The
prototype can currently visualize mistakes in
prediction, where the algorithm misclassifies
data. It can also reverse engineer each data
point’s path to visualize where in the
algorithm an error was made. The most
popular paths taken through the algorithm’s
complex network of decisions are also
visualized.
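As a point of comparison, scikit-learn exposes the same per-sample path information that such a visualization needs. The following sketch (not the author's prototype) recovers each misclassified sample's route through a tree classifier:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# decision_path returns a sparse indicator matrix: entry (i, n) is 1
# when sample i passes through node n.
paths = clf.decision_path(X)
pred = clf.predict(X)

# Misclassified samples, and the node sequence each one followed --
# the raw material for highlighting erroneous paths in a visualization.
for i in (pred != y).nonzero()[0][:3]:
    nodes = paths.indices[paths.indptr[i]:paths.indptr[i + 1]]
    print(f"sample {i}: true={y[i]} predicted={pred[i]} path={list(nodes)}")
```

Feeding samples through one at a time, rather than in a batch, is what allows the prototype's real-time animation of data flowing through the decision structure.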
The project is conceived using a conceptual
approach to machine learning, to experiment
with how aesthetics and design can be used
as tactics for engagement with complexity.
Tactics include: a move away from data
visualization toward computational
visualization to focus on real-time and even
projected rule sets, rather than a retrospective
and fixed approach to data. Adapted insights
from programming games and animation are
used to present both human-scale and
emergent processing speeds, the flow of data
through an algorithm, and how decisions are
made in real-time.
The research is working toward the use of
visual arts tactics as a means of “ethical
debugging”, in which complex terms, such as
bias and interpretability can be presented
visually, and algorithms can be engaged with
aesthetically as socio-political systems. [1]
As the research continues to develop, more
speculative design avenues will be explored,
alongside technical problems. The project so
far has concentrated on developing a more
robust visualization of a machine learning
algorithm to engage and collaborate with
computer scientists working in this field. As
the research develops, the intention is to
develop further scenes of this application that
navigate more strongly, even contentiously,
back toward the visual arts, to explore the
potential for “novel models of relationality
and connectivity.” [2] An overarching
question asks how artistic knowledge can
contribute to the issues of the day, generating
new ideas, proposals, and methods, using
aesthetics as the primary paradigm of
knowledge generation, without solely
assimilating with traditional scientific
methods.
References
1. Catherine Griffiths, “Visual Tactics
Toward an Ethical Debugging,” Digital
Culture & Society: Rethinking AI, 4, no. 1
(2018): 217.
2. Simon O’Sullivan, “Inquiry,” in NJP
Reader 1: Contributions to an Artistic
Anthropology, ed. Youngchul Lee and Henk
Slager (Seoul: Nam June Paik Art Center,
2010), 52.
Biography
Catherine Griffiths is a PhD candidate in
Interdisciplinary Media Arts + Practice at the
University of Southern California, School of
Cinematic Arts. She researches at the
intersection of visual art, computation and
critical studies, focusing on the visualization
of algorithms, in the context of machine
learning and the ethics of algorithms debate.
She has a bachelor's degree in Fine Art from
the University of the Arts, London, and a
master’s degree in Architecture from
University College, London.
Multimedia Art: The Synthesis of Machine-generated Poetry and
Virtual Landscapes
Suzana Ilić
Martina Jole Moro
University of Innsbruck
Department for Linguistics
io.suzanai@gmail.com
University of Innsbruck
Department for Architecture
martina.moro.mjm@gmail.com
Abstract
Artificial Intelligence, Virtual and Augmented
Reality are transforming multimedia art,
offering the opportunity for novel creative
human-machine collaborations and assisted
design. In this work we demonstrate the
synthesis of a character-level long short-term
memory network for generating poetry and
L.e.O. (Luci e Ombre; Lights and Shadows), a
virtual landscape composed of dynamic
architectural elements and surfaces, providing
an immersive digital art experience.
Generating poems with LSTMs
Recurrent neural networks (RNNs) encompass
high-dimensional hidden states and are able to
iterate over sequences of arbitrary size, and to
process and memorize information. [1] RNN
variants are commonly deployed in the field of
natural language generation. [2] We trained a
character-level long short-term memory
network (LSTM) on a dataset of 1.3M
characters of classical and contemporary
poems, where the network receives an input at
each timestep, updates its hidden state and
predicts one character at a time. The model
architecture comprises an LSTM layer with 128
hidden units, followed by a Dropout layer (0.2)
as a regularization technique to avoid
overfitting. [3] The best results were achieved
with the Adam optimizer, a learning rate of
0.0005 and a categorical cross-entropy loss.
The selected poem was generated during epoch
105 and was sampled from a range of diversity
values for the temperature parameter in order to
experiment with uncertainty. The poem shows
errors in morphology and syntax, but seems
largely coherent from a semantic perspective,
where expressions like soul of the storm can be
interpreted as metaphors. The linguistic style
approximately matches the requirements and
aesthetics of poetry; however, there is a striking
word-level repetitiveness (see Table 1).
on a charred spinning wheel,
the world was cold the soul of the storm,
the shadow s soul where the strong she still,
the stars that beautiful and strain,
and the strange and the storm of the stars,
and the stars of the storms of the stars,
i say i shall be the made the stars of the storm,
the stars when the wind of the stream of the
shadow,
the thing of the said the world was a sea,
and the shadow of the sky
Table 1. The curated machine-generated poem reveals
interesting, novel metaphors.
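The temperature sampling used to vary the poem's "diversity" can be sketched in a few lines; the character distribution below is a made-up example, not the model's actual output.

```python
import numpy as np

def sample_char(preds, temperature=1.0):
    """Temperature sampling: rescale the predicted character
    distribution in log space, renormalize, then draw one index.
    Low temperatures sharpen the distribution (safer, repetitive
    text); high temperatures flatten it (more surprising output)."""
    preds = np.asarray(preds, dtype=np.float64)
    logits = np.log(preds + 1e-12) / temperature
    probs = np.exp(logits) / np.exp(logits).sum()
    return np.random.choice(len(probs), p=probs)

# With a near-zero temperature, sampling collapses onto the argmax:
dist = [0.1, 0.6, 0.3]
picks = {sample_char(dist, temperature=0.02) for _ in range(20)}
print(picks)  # → {1}
```

At generation time, the network's softmax output for the next character would be passed through this function once per timestep.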
3D-modeling: The virtual landscape
L.e.O is an alternate reality composed of real
light and shadows, where nine distinct
silhouettes were extracted from the original
structure. The selected objects were then
deconstructed, analyzed and reassembled in a
different manner. Subsequently, the pieces
were scaled up and down depending on their
role in the virtual environment. The entire
digital island is made of the same selection of
the original nine lights and shadows using a
modular system and was developed in
Rhinoceros 3D, a design application software,
and then imported into Unity for adding a range
of different textures to the environment (see
Fig. 1). [4] As a final step, we blended the
human-read audio recording of the AI poem
into the video sequence, which, experienced in
VR, gives the illusion of exploring a surreal
virtual environment, while hearing the
machine-generated poem in the background.
Fig. 1. The digital 3D-island L.e.O (Luci e Ombre), a virtual
landscape composed of deconstructed light and shadow objects.
Conclusion
Creative design and visualization projects can
be enhanced by Artificial Intelligence in
various ways, such as leveraging deep learning
models for image, video and text generation.
Thus, it can be used for content creation as well
as for assisting humans in the creative process.
This multimedia art project demonstrates how
two creative streams can be merged: The
synthesis of (1) a 3D model of a virtual
landscape, created through modular and
patchwork assembly, and (2) a poem generated
by a character-level LSTM trained on a dataset
of 1.3M characters of poems. Future work can
include models such as Generative Adversarial
Networks for generating novel virtual
landscapes.
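The character-level generation described above, a multi-layer LSTM trained on a poetry corpus and sampled one character at a time, can be sketched roughly as follows. This is an illustrative reconstruction in PyTorch, not the authors' code; the layer sizes, sampling temperature and helper names (CharLSTM, sample) are our assumptions.

```python
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    """Minimal character-level LSTM language model."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        # x: (batch, seq_len) of character indices
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state  # logits over the next character

def sample(model, stoi, itos, seed="the ", length=80, temperature=0.8):
    """Generate text one character at a time from a seed string."""
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]])
    state, out = None, list(seed)
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            out.append(itos[nxt])
            idx = torch.tensor([[nxt]])
    return "".join(out)
```

After training such a model on the 1.3M-character corpus, repeated sampling and human curation would yield output in the style of Table 1.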
Acknowledgements
We gratefully acknowledge the contributions
for this project: Johannes Felder (video),
Christian Anich (music) and Josiah Sampson
(voice).
References
1. Sutskever, Ilya, James Martens, and Geoffrey E. Hinton. "Generating text with recurrent neural networks." In Proceedings of the 28th International Conference on Machine Learning (ICML-11) (2011): 1017-1024.
2. Gatt, Albert, and Emiel Krahmer. "Survey of the State of the Art in Natural Language Generation: Core Tasks, Applications and Evaluation." Journal of Artificial Intelligence Research 61 (2018): 65-170.
3. Zaremba, Wojciech, Ilya Sutskever, and Oriol Vinyals. "Recurrent Neural Network Regularization." arXiv preprint arXiv:1409.2329 (2014).
4. Lee, Ghang, Charles M. Eastman, Tarang Taunk, and Chun-Heng Ho. "Usability principles and best practices for the user interface design of complex 3D architectural design and engineering tools." International Journal of Human-Computer Studies 68, no. 1-2 (2010): 90-104.
Biographies
Suzana Ilić is a PhD student (Linguistics and Media Program) at the University of Innsbruck, Austria. Previously she was a visiting researcher at the National Institute of Informatics in Japan, where she worked on affect-sensitive deep learning models for text. Her research interests include sentiment analysis and text-based affective computing, as well as generative models for creative language output. She is currently working on conversational systems (NLU) in Tokyo, Japan.
After a career in competitive sports and subsequent work in journalism, both as a writer and photographer, Milan-born architect and artist Martina Moro began her studies in Architecture at the University of Innsbruck, Austria. She is currently working on art projects in the fields of design, computer technology and architecture. She has contributed to numerous exhibitions in Austria and Italy, among them the Venice Architecture Biennale 2018.
Microbial Sonorities
Carlos Castellanos, Ph.D.
Department of Art, Kansas State University, U.S.A.
ccastellanos@ksu.edu
Abstract
Microbial Sonorities explores the use of sound
to investigate the bioelectric and behavioral
patterns of microorganisms. The piece features
a hybrid biological-electronic system wherein
variations in electrical potential from an array of
microbial fuel cells are translated into rhythmic,
amplitude and frequency modulations in
modular electronic and software-based sound
synthesizers.
Introduction
Microbial Sonorities explores the use of sound
to investigate the bioelectric and behavioral
patterns of microorganisms. Based upon
inquiries into emerging bioenergy technologies
and ecological practices as artifacts of cultural
exploration, the piece features a hybrid
biological-electronic system wherein variations
in electrical potential from an array of microbial
fuel cells are translated into rhythmic, amplitude
and frequency modulations in modular electronic and software-based sound synthesizers.
Research Focus
The research focuses on three primary areas: (1)
Microbial Fuel Cells (MFCs): these are devices
that generate electricity from the metabolic
reactions of bacteria found in diverse
environments such as lakes, compost and
wastewater. [1] (2) Modular hardware and
software synthesizers: The bioelectrical
fluctuations of the MFCs are used as modulation
and trigger sources for a Eurorack-based
modular synthesizer and/or a custom-designed
software synthesizer built in Max/MSP
(cycling74.com). This entails building electronic circuits to amplify the electrical
signals generated by the bacteria and software to
translate the signals into control voltage (CV)
sources appropriate for the synthesizer. (3)
Machine Learning: machine-learning algorithms are used as a way of interpreting the
shifting electrical patterns generated by the
bacteria.
Fig 1. Microbial Sonorities installed at Washington State University, Pullman, Washington, USA in 2016. The modular synthesizers are shown in the center behind the microbial fuel cells.
Pattern recognition/classification is
used to trigger synthesizer presets and CV gate
signals while statistical regression is used to
predict variations in electrical potential. If a
comprehensive understanding of the
bioelectrical patterns can be attained, it will be
used to inform the development of a sonic
compositional system that is dictated by these
patterns. In essence, allowing the bacteria to
“express” themselves sonically.
System Overview
The current system set-up typically consists of
four MFCs, a Eurorack modular synthesizer
system, an Arduino microcontroller (arduino.cc)
and the Max/MSP graphical coding
environment (cycling74.com). The biomatter
used for the MFCs is usually fresh compost or if
possible, benthic mud from a local lake or other
aquatic body. Voltage from each MFC is
amplified and connected to an analog input on
the Arduino. In some cases it may also be
plugged directly into the control voltage input
on one of the Eurorack modules.
Fig 2. Typical voltage curves for a microbial fuel cell. The
horizontal axis represents time (in hours) while the vertical
axis represents voltage (in millivolts). Taken from [2].
The piece operates on two temporal scales.
The first, which I call “immediate,” consists of
a simple linear mapping of voltage to pitch for
each MFC. Transient voltage spikes are also
detected and mapped to sound. The second time
scale, “longitudinal,” is a longer-term (24-48
hours) mixing of Eurorack synth patches. Each
MFC is assigned a synthesizer patch according
to its current “life stage.” A life stage is simply
a point in the overall voltage curve over which a
typical MFC travels over the course of 24-48
hours before it “dies” (i.e. when the bacteria run
out of organic matter to metabolize; see fig. 2).
[2] Four life stages have been identified and
assigned a synthesizer patch. A regression
curve, using a neural network, was then created
to mix/transition between the four different
sounds/patches. Training data for the network
was created simply by drawing a curve in Max’s
itable object that matches a typical MFC voltage
curve. The x coordinates of the itable represent
discrete time steps (0-50 hours), while the y coordinates represent voltages (0-1000 millivolts). While the piece is running, a moving
average of the voltage is kept for each MFC and
sent out to the neural network application once
every 30 minutes via OSC (opensoundcontrol.org).
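The "immediate" time scale described above, a linear voltage-to-pitch mapping per MFC plus transient spike detection, might look like the following in outline. This is a hedged sketch of the mapping logic only, not the actual Max/MSP patch or synthesizer routing; the MIDI note range and spike threshold are illustrative assumptions not given in the text.

```python
def voltage_to_pitch(mv, v_range=(0.0, 1000.0), midi_range=(36, 84)):
    """Linearly map an MFC voltage (in millivolts) to a MIDI note number,
    mirroring the 'immediate' time scale of the piece."""
    v_lo, v_hi = v_range
    m_lo, m_hi = midi_range
    v = min(max(mv, v_lo), v_hi)  # clamp to the expected voltage range
    return m_lo + (v - v_lo) / (v_hi - v_lo) * (m_hi - m_lo)

def detect_spikes(samples, threshold=50.0):
    """Flag transient voltage spikes: jumps between consecutive
    readings larger than `threshold` millivolts."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]
```

In the installation the equivalent mapping happens on control voltages rather than MIDI, but the linear-scaling idea is the same.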
Conclusions & Future Work
In addition to exploring different scales and
construction materials for the MFCs, other
features beyond voltage and electrical properties
(e.g. chemical properties) are currently being
investigated. Overall, the use of sound and
machine learning as methods of bridging human
and microbial lifeworlds and exploring the
material agency of microorganisms continues to
be an exciting area worthy of continued, playful
investigation. More information on the project is available online at ccastellanos.com/projects/microbial-sonorities/.
References
1. Bruce E. Logan, Microbial Fuel Cells
(Hoboken, N.J: Wiley-Interscience, 2008).
2. M. Azizul Moqsud et al., “Bioelectricity from
Kitchen and Bamboo Waste in a Microbial Fuel
Cell,” Waste Management & Research: The
Journal of the International Solid Wastes and
Public Cleansing Association, ISWA 32, no. 2
(February 2014): 124–30.
Biography
Carlos Castellanos is an interdisciplinary artist
and researcher with a wide array of interests
such as cybernetics, ecology, embodiment,
phenomenology, artificial intelligence and
transdisciplinary collaboration. His work
bridges science, technology, education and the
arts, developing a network of creative
interaction with living systems, the natural
environment and emerging technologies.
Part III. Artistic project abstracts
Castellanos is Assistant Professor and director
of the Digital/Experimental Media Lab in the
Department of Art, Kansas State University.
The 360° Video Secret Detours as Case Study to Convey Experiences
through Immersive Media and the Method of Presentation
Elke Reinhuber
School of Art, Design and Media ADM / NTU Singapore
elke@ntu.edu.sg; eer@me.com
Benjamin Seide
School of Art, Design and Media ADM / NTU Singapore
bseide@ntu.edu.sg
Ross Williams
School of Art, Design and Media ADM / NTU Singapore
rawilliams@ntu.edu.sg
Abstract
Our recent work Secret Detours served as an
immediate approach to digitally preserve a
Chinese garden in Singapore and has been
conceived as an immersive 360° video. We have
investigated several different presentation
modes in order to explore the screening
possibilities. The constraints and limitations of each mode have necessitated a reconfiguration of the visual and audio composition. The experience of the work, and the aesthetic and technological decisions that inform it, vary significantly,
depending on whether the work is collectively
viewed in a hemispherical dome, a cylindrical
panorama, a panoramic LED video wall or with
a range of different VR headsets.
Secret Detours
Secret Detours was filmed with 360° spherical
video in a Chinese garden in Singapore, which
opened in 1956 – fairly old for the 53-year-old city state. The garden is currently undergoing massive re-development: several old trees have been felled, and bridges and pavilions removed. As it was important to act fast, my two collaborators, Benjamin Seide and Ross Williams, and I decided to capture the garden with 360° imagery, not only for artistic purposes but for conservation reasons as well. Four dancers acted out a choreography representing the cardinal directions of Chinese mythology, around which the garden was originally conceived.
Fig 1. Secret Detours, 2018, Reinhuber, Seide, Williams. Forking paths in the south-east of the Yunnan Garden, represented by the dancers dressed in vermillion and azure. 360° video in equirectangular projection on a flat screen.
Although the visualization gives an
impression of being inside the garden, it is still a very static experience. We are therefore currently working on a room-scale model for VR, based on photogrammetric assessments, to restore the garden according to the floor plan in the virtual space.
Fig 2. Different to the immersive experience in a surrounding
projection, the dancers on the planar panoramic video wall
NEXUS accompany passers-by.
For 360° media, the facilitation of viewing techniques has only just begun. After Morton Heilig's and Ivan Sutherland's first approaches, which placed cathode-ray tubes in front of the user's eyes within a bulky set-up, the facilities today range from DIY cardboard solutions, which immensely popularized the medium, to high-end immersive environments. In particular, standalone headsets for 360° media (including stereoscopic viewing experiences) appear to be a promising solution, even when their obvious limitations have to be contemplated. Sound presentation is similarly affected, with little standardization in channel configuration (outside of cinematic presentations) and the ever-present issue of variable room acoustics and ambient noise. The recent resurgence of ambisonics and binaural techniques for headphones in VR offers a way to mitigate some of the standardization issues mentioned, but not without limitations.
Fig 3. Larger than life-size presentation in the Digitalis dome with one projector invites the viewer to sit down and observe.
However, considering the respective iterations we have already produced, the perception of the 360° screenings differs hugely, depending on the particular presentation technique, since the technology around spherical recording and display is still in flux, owing to rapid developments and industry players competing for market penetration.
Fig 4. Presentation of Secret Detours in a 7 metre Fulldome with four HD projectors.
Fig 5. The shared experience of viewing Secret Detours in a cylindrical panorama – the ideal set-up for the mobile spectator.
Table 1: Current screening formats of Secret Detours.
Format | Width (px) | Height (px) | Geometry | Audio
Original footage | 7680 | 3840 | equirectangular | mute
VR headset | 7680 | 3840 | spherical | binaural
Cylindrical panorama | 5248 | 608 | cylindrical | 8-channel
Panoramic LED video wall | 3840 | 480 | planar | 5.1
Fulldome | 2048 | 2048 | hemispherical | 5.1
MiniFulldome | 1200 | 1200 | hemispherical | 5.1
Flat screen (VLC, GoPro or YouTube) | screen resolution | n/a | planar, scrollable | binaural
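The geometries listed in Table 1 differ in how pixels map to viewing directions. For the equirectangular master footage, longitude spans 360° across the image width and latitude 180° down its height, hence the exact 2:1 aspect ratio (7680 x 3840). A minimal sketch of that pixel-to-direction mapping, our own illustration rather than production code:

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing direction.
    Longitude covers -pi..pi across the width, latitude +pi/2..-pi/2
    down the height."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Each presentation mode (spherical, cylindrical, hemispherical, planar) effectively resamples the footage through a different such mapping, which is why every venue required its own reconfiguration.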
Biography
Elke Reinhuber, Benjamin Seide and Ross
Williams currently teach and research in Media
Art at ADM, School of Art, Design and Media
at NTU Singapore. With their experience and
expertise in the areas of sound design
(Williams), special effects and imaging (Seide)
as well as camera and concept (Reinhuber), they
explore the fascination and possibilities of
immersive media from different points of view,
especially in regard to representations of
culturally relevant subjects.
Parallax Relax: Expanded Stereoscopy
Max Hattler
City University of Hong Kong
mhattler@cityu.edu.hk
Abstract
In recent years, stereoscopic films, virtual reality
(VR) and augmented reality (AR) have matured
and proliferated. This
newly-emerging
stereoscopic status quo operates within the same
principles set out at the beginning of the technology: stereoscopy produces 3D depth-perception from the stereoscopic fusion of left
and right images. Yet, beyond the normative
practice of emulating human vision, stereoscopy
can be leveraged to offer new perceptions and
aesthetics.
While phenomena such as binocular rivalry
are well researched within cognitive
neuroscience and psychophysics, their artistic
potential remains largely untapped. Artists such
as Salvador Dali, Memo Akten and Blake
Williams are among the few who have explored
this territory. We propose the term expanded
stereoscopy to describe stereoscopic processes
which create spaces where depth relations are
disjointed and paradoxical, where binocular
rivalry is used to create unique visual effects or
to guide viewer attention, or where new
dimensionality and visual intensity are
excavated from flat source material. Such
expanded, technologically-aided uses of
stereoscopy allow for ways of seeing that are
impossible in the real world and can be seen as
a true expansion of the senses.
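One simple way to experiment with the mismatched depth cues described above is to present deliberately unequal images to the two eyes, for instance via a red/cyan anaglyph built from a manipulated stereo pair. The sketch below is our own illustration of the general idea, not the artist's process; the function names are assumptions.

```python
import numpy as np

def anaglyph(left, right):
    """Combine a stereo pair into a red/cyan anaglyph: the red channel
    comes from the left eye's image, green and blue from the right's.
    Feeding the eyes deliberately mismatched imagery is one route to
    the binocular rivalry effects discussed above."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # red channel from the left eye
    return out

def shift_disparity(img, px):
    """Horizontally shift one eye's image to exaggerate, flatten or
    invert parallax, producing disjointed depth relations."""
    return np.roll(img, px, axis=1)
```

Pairing `anaglyph` with `shift_disparity` applied to only one eye yields depth impressions that have no real-world equivalent, in the sense of "expanded stereoscopy" proposed here.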
Parallax Relax presents a discussion of some
of the challenges and findings of our ongoing
arts-based research into expanded stereoscopy,
across the fields of single-screen projection,
audio-visual live performance, and 360-degree
immersive media, which began with the creation
of III=III for Animamix Biennale 2015-16.
Fig 1. III=III, 2016, Max Hattler, stereoscopic digital
animation.
Biography
Max Hattler is an artist and academic who works
with abstract animation, video installation and
audiovisual performance. He holds an MA in
Animation from the Royal College of Art and a
Doctorate in Fine Art from the University of
East London. His work has been shown at
festivals and institutions such as Resonate, Ars
Electronica, ZKM Center for Art and Media,
MOCA Taipei and Beijing Minsheng Museum.
Awards include Supernova, Cannes Lions,
Bradford Animation Festival and several Visual
Music Awards. Max has performed live around
the world, including at Playgrounds Festival, Re-New Copenhagen, Expo Milan, Seoul Museum
of Art and the European Media Art Festival. He
is an Assistant Professor at School of Creative
Media, City University of Hong Kong. Max’s
current research focuses on synaesthetic
experience and visual music, the narrative
potential of abstract animation, and expanded
artistic approaches to binocular vision.
The Electronic Curator or
How to Ride Your CycleGAN
Eyal Gruss
Mahanaim 134 Tel-Aviv University
eyalgruss@gmail.com
Abstract
The Electronic Curator examines whether a
computer can not only generate art, but also
evaluate its quality. [1] The work uses a
Generative Adversarial Network (GAN), which
constitutes a dialog between two competing
neural networks. Here one represents a painter,
who turns a human face into a vegetable portrait
(fig. 1). The other represents a curator, who
evaluates whether the portrait indeed looks like a vegetable face and encourages the painter to improve. The dialog between the competing networks represents the artistic process. Training is unsupervised, based on cycle-consistent generative adversarial networks (CycleGAN). [2] Thus we require only a set of face images and an unpaired, unrelated small
set of vegetable-faces collected from a Google
search on the Internet (fig. 2). In order to avoid
mode collapse and get diverse and interesting
results, we use a modified loss function inspired
by DistanceGAN. [3]
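The modified objective mentioned above combines two published ideas: CycleGAN's L1 cycle-consistency and a DistanceGAN-style distance-preservation term. The sketch below illustrates those two loss terms schematically; it is not the authors' exact loss, and the function names are ours.

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """CycleGAN's cycle term: translating to the other domain and back,
    F(G(x)), should recover x (mean absolute error)."""
    return float(np.mean(np.abs(x - x_reconstructed)))

def distance_preservation_loss(x1, x2, g_x1, g_x2):
    """DistanceGAN-style term: the distance between two source images
    should match the distance between their translations, which
    discourages the generator from collapsing many inputs onto one
    output (mode collapse)."""
    d_source = np.mean(np.abs(x1 - x2))
    d_target = np.mean(np.abs(g_x1 - g_x2))
    return float(abs(d_source - d_target))
```

In training, terms like these would be weighted and added to the usual adversarial losses of the painter and curator networks.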
In exhibition mode, the painter observes the
spectator's face and turns it in real time into a
vegetable-face. The curator then grades the
outcome. If the outcome is good enough to
confuse the curator, a curatic text is generated
based on the vegetables and fruits found in the
portrait by object detection (fig. 3). In a world of
AI art and creative machines, will the art of
curation remain reserved for humans?
In the talk, we will review the techniques that helped in training and in inference, as well as those which did not. Namely, we will discuss data collection and training strategies, modifications to the loss, and inference-time normalization.
Eran Hadas
Mahanaim 134 Tel-Aviv University
ehadas@gmail.com
Fig 1. A vegetable-face generated in real-time in inference.
Fig 2. Unpaired samples from the training set in the two
domains.
Fig 3. The first author's pretty face, its corresponding vegetable
portrait, and the curatic text generated for it. From an exhibition
at Heinz Nixdorf MuseumsForum, Paderborn, Germany.
References
1. Project video: youtube.com/watch?
v=4sZsx4FpMxg.
2. Zhu, Park, Isola and Efros, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, junyanz.github.io/CycleGAN.
3. Benaim and Wolf, One-Sided Unsupervised
Domain Mapping, arxiv.org/abs/1706.00826.
Biographies
Eyal Gruss is a machine learning researcher and
an artist. He is based in Israel and holds a PhD
in physics. His works include poetry, interactive
installations and computer-generated art.
Eran Hadas is an Israeli poet, software
developer and media artist. Among his
collaborative projects are a headset that
generates poems from brainwaves (with Gruss),
and a documentarian robot that interviews
people about the meaning of being human.
Hadas was the 2017 Schusterman Artist-in-Residence at Caltech. He teaches in the New
Media Program at Tel-Aviv University.
Mahanaim 134 is Gruss and Hadas' tech-art
collaboration.
Das Fremde Robot Installation
Michael Spranger
Stéphane Noël
Sony Computer Science Laboratories Inc., 3-14-13
Higashigotanda, Tokyo, Japan
Michael.spranger@gmail.com
Artist and Curator
snoel@me.com
Abstract
We discuss a recent award-winning Artificial Intelligence installation that deals with autonomous meaning creation in machines.
The cultural identity of this micro-society faces
foreign cultural elements - the visitors. Like any
indigenous population gradually invaded by an
outside population, its culture is forced to
expand, to hybridize, withdraw or possibly
surrender.
The installation integrates recent techniques in AI: deep learning and deep reinforcement learning, as well as more traditional methods such as rule-based approaches. [2] It serves as a showcase for the ability of current systems to generate symbolic culture and autonomous meaning.
Installation
In a dimly lit space, a tribe of robots is busy. The members of this colony observe the world around them and try to describe it using a language they create in real time. Each identifies the elements in its vicinity, invents words for them, and communicates those names to its counterparts. Together they create a language that the whole village can understand, and thus build the common culture of this artificial species.
Suddenly another species intervenes, disrupting the quiet atmosphere of this community. Humans enter the space. A person approaches, trying to grab the attention of the agents, which turn their camera-eyes and microphone-ears towards him or her. The visitor initiates a form of communication. The robots' culture is put to the test. How do the robots deal with the novel objects? Will the culture resist these external interventions, will it adapt its vocabulary and evolve, or will it simply vanish, to be replaced by a dominant human culture that is totally external and unknown to it?
Das Fremde immerses visitors as explorers who witness the birth of a language and the evolution of the culture of another, non-organic life form. The installation tries to capture the moment of discovery: the moment the audience turns into pioneers and ethnologists stumbling upon an emerging civilization.
Das Fremde is a performative installation featuring a species of artificially intelligent entities that create their own language and culture through a cultural evolutionary process.
Fig 1: Installation in Zurich, CH (11/2016)
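The language-creation loop described here follows the logic of a naming game in the spirit of Steels' language games: a speaker names an object (inventing a word if it has none), the hearer adopts the word on failure or both prune their vocabularies on success, and the population converges on a shared lexicon. A toy Python sketch of that dynamic, not the installation's actual code; all parameters are illustrative.

```python
import random

def naming_game(n_agents=10, objects=("lamp", "chair", "visitor"),
                rounds=2000, rng=None):
    """Toy naming game: agents invent words for objects and align on a
    shared vocabulary through repeated pairwise interactions."""
    rng = rng or random.Random(0)
    # each agent maps object -> set of candidate words
    vocab = [{o: set() for o in objects} for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        obj = rng.choice(objects)
        if not vocab[speaker][obj]:                 # invent a new word
            vocab[speaker][obj].add("w%d" % rng.randrange(10**6))
        word = rng.choice(sorted(vocab[speaker][obj]))
        if word in vocab[hearer][obj]:              # success: align on it
            vocab[speaker][obj] = {word}
            vocab[hearer][obj] = {word}
        else:                                       # failure: hearer adopts
            vocab[hearer][obj].add(word)
    return vocab
```

In the installation, "objects" include the visitors themselves, which is how human presence perturbs and reshapes the robots' emerging vocabulary.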
Concept and Discussion
Das Fremde is German and refers to something
between the strange and the stranger – or both at
the same time. We apply this concept for the
installation in two ways.
On the one hand, Das Fremde examines the
function of the foreign for the definition and
self-construction of cultural identity. The robots
see the visitors as foreign objects. Driven by
curiosity or boredom, excitement and disinterest
they interact with and about the visitors, which
in turn shapes their culture and influences the
construal and conceptualization of reality.
Concepts and words emerge about the visitors.
Sights and sounds might be picked up, imitated
and ultimately become frozen into concepts and
cultural memory.
On the other hand, the installation is a foreign object to us, the visitors. For us, especially in Europe, there is a cultural tradition of seeing machines as foreign entities, separated from us humans by foreign codes, strange behaviors and exotic sights and sounds. The installation
questions our relationship with machines and
makes the divide directly experienceable. This is especially important at a time when machines and machine intelligence are part of a heated discourse. The installation asks what it is like to be among machines that do not assemble cars behind fenced-off areas but instead seem to demonstrate a level of autonomy and independence that casts the human being in the role of outsider.
Das Fremde offers visitors the opportunity to
immerse in a slow emotional process during
which one can witness the birth of a language
and the evolution of the culture of another, non-organic life form. We capture the poetic moment
of discovery: the moment the audience turns into
explorers and ethnologists stumbling upon an
emerging civilization. Consequently, visual and
sensory aspects of the installation are designed
to favor an intimate encounter, rather than to
overload the visitors with escalating effects and
placatory discourse.
The installation follows in the footsteps and
extends earlier artistic work at the interface
between art and technology – such as The
Talking Heads Experiment, conducted in the
years 1999-2001 and recently published as a
book. [1] This was the first large-scale experiment in which populations of embodied agents created a new shared vocabulary by playing language games about real-world scenes in front of them. The
agents could teleport to different physical sites
in the world through the Internet. Sites in Antwerp, Brussels, Paris, Tokyo, London, Cambridge and several other locations were linked into the network. Similarly, the Ergo-Robots experiment by Pierre-Yves Oudeyer and
collaborators investigated artificial curiosity and
language formation in robots as part of the
exhibition "Mathematics: A Beautiful Elsewhere" at the Fondation Cartier pour l'Art Contemporain in Paris, France. Here robots were
equipped with mechanisms that allow them to
learn new skills and invent their own language.
Endowed with artificial curiosity, they explore
objects around them, as well as the effect their
vocalizations produce on humans.
References
1. L. Steels, The Talking Heads Experiment:
Origins of Words and Meanings, volume 1 of
Computational Models of Language Evolution.
(Berlin: Language Science Press, 2015).
2. M. Spranger, The Evolution of Grounded
Spatial Language (Language Science Press,
2016).
Links
Website: http://www.dasfremde.world
Dossier: https://tinyurl.com/y9p5j5by
Biographies
Michael Spranger received a PhD from the Vrije
Universiteit in Brussels (Belgium) in 2011 (in
Artificial Intelligence). He currently holds a
research position at Sony CSL Inc., Tokyo, Japan. Michael has published more than 60 peer-reviewed papers on AI, developmental robotics
and computational linguistics. Michael has been
producing various art works reflecting on the
nature of Artificial Intelligence and our
relationship with machines: including robot
installations such as Confident Machines
(2011), Das Fremde (2016). He was also a
technical advisor for the opera Casparo (2011).
Stéphane Noël served as director of Les
Urbaines festival in Lausanne (1997-1998), and
as co-director of Belluard festival in Fribourg
(2004–2007). He has been on the artistic and
editorial board of Gaîté lyrique in Paris (2009–
2011) and acted as an advisor for European.Lab.
Stéphane Noël's artistic work ranges from screenwriting to media art. Das Fremde is an attempt at developing aesthetic concepts around artificial intelligence and humanism.
Repopulating the City:
Introducing Urban Electronic Wildlife
Greg Nijs
Guillaume Slizewicz
Urban Species, Dept. of Architecture, ULB
contactgregnijs@gmail.com
Urban Species, Intermedia, LUCA School of Art
Guillaume.slizewicz@luca-arts.be
Abstract
The artworks put forward in this presentation draw upon the idea of the cyborg from an ecological perspective. [1] In addressing the question of bio-extinction and electronics [2], we propose the introduction of 'urban electronic wildlife' in public space as a way to induce wonder and awareness in human urban dwellers. To that end, we introduce newly created hybrid species – i.e. physical devices with zoomorphic and spectral traits packed with machine learning algorithms and exhibiting autonomous behavior – as an act of 'applied' speculative fabulation. [3] [4] We currently have two projects in development in our studio: Capricious Ghost and Stray Peddler.
Capricious Ghost is an installation which asks passers-by to show it an object and reacts to what it sees thanks to object-detection algorithms. It is made of a Raspberry Pi with a camera running a detection algorithm trained on the COCO dataset, a button, a speaker, an RF emitter and an RF-connected plug socket. [5] The artwork "speaks" to the user using eSpeak and asks to see a certain type of object. The detection is triggered by the push of a button. Once the button is pushed, the computer describes what it sees and, if the requested object is present, turns on the plug socket via radio frequency. This plug socket can power any electrical device. The way the set-up is presented, what action it triggers and its appearance in public space are variable (much like ghosts' apparitions).
Whatever form it takes on, Capricious Ghost resonates with concerns about ubiquitous technology, the sentient city and animism, as well as extinction, radiation and an ecologically haunted humanity. [6] [7]
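The decision logic of Capricious Ghost, speaking what is seen and switching the socket only when the requested COCO class is detected, can be sketched as follows. Camera capture, the detector itself and the RF control are hardware-specific and omitted; the function names, confidence threshold and the (label, confidence) detection format are our assumptions, not the artists' implementation.

```python
def should_trigger(detections, requested_label, min_confidence=0.5):
    """Decide whether the RF plug socket should be switched on: true when
    the requested COCO class appears among the detections with sufficient
    confidence. `detections` is a list of (label, confidence) pairs, as a
    typical COCO-trained detector would return."""
    return any(label == requested_label and conf >= min_confidence
               for label, conf in detections)

def describe(detections, min_confidence=0.5):
    """Compose the sentence the installation speaks about what it sees."""
    seen = sorted({label for label, conf in detections
                   if conf >= min_confidence})
    return "I see: " + (", ".join(seen) if seen else "nothing")
```

On the device, `describe` would feed a speech synthesizer such as eSpeak, and a true result from `should_trigger` would fire the RF emitter.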
The Stray Peddler is a small robot that roams freely in the city and delivers messages to urban dwellers. It was inspired by different experiences we had in the field, a public place in the center of Brussels. It is a mix between a Jehovah's Witnesses' trolley, an abandoned, quivering circular saw, stray dogs and the small electric devices sold by street vendors. It also draws on the idea that peddlers helped create public spaces by conveying ideas, discussions, controversies and stories in and between cities.
The peddler is made of a simple, off-the-shelf autonomous robot based on an Arduino microcontroller, with an added Bluetooth speaker to give it a voice. A Raspberry Pi is mounted on it to give it the ability to detect and follow people in order to deliver messages to them. To change the meaning of its presence and enhance its zoomorphic attributes, we camouflage it with fake fur and fake eyes.
We believe that both projects have the potential to question the relationship city dwellers have with now-ubiquitous technology. The underlying idea is to advocate for technologically generated life-forms as critters in their own right, not opposing natural and technologically generated life-forms but reinforcing their bonds in their struggle for survival on a damaged planet.
References
1. Ursula K. Heise, "From Extinction to Electronics," in Zoontologies, ed. Cary Wolfe (Minneapolis/London: University of Minnesota Press, 2003), 59.
2. Stina Attebery, "Coshaping Digital and Biological Animals," HUMaNIMALIA Vol. 6, No. 2, Spring 2015, accessed August 15, 2018, https://www.depauw.edu/humanimalia/issue12/attebery.html.
3. Donna J. Haraway, Staying With the Trouble (Durham: Duke University Press, 2016).
4. Lucienne Strivay et al., "Les Enfants du Compost," in Gestes Spéculatifs, ed. Didier Debaise and Isabelle Stengers (Dijon: Les presses du réel, 2015), 151.
5. Tsung-Yi Lin et al., "Microsoft COCO: Common Objects in Context," working paper, accessed August 2, 2018, https://arxiv.org/abs/1405.0312.
6. Nigel Thrift, "The 'sentient' city and what it may portend," Big Data & Society, April-June 2014: 1.
7. Anna Tsing et al., Arts of Living on a Damaged Planet (Minneapolis/London: University of Minnesota Press, 2017).
Biographies
Guillaume Slizewicz is a French designer working at Urban Species (Intermedia Lab, LUCA School of Arts), an interdisciplinary research group focusing on citizen participation in the city of Brussels. His work is at the crossroads of political science and interaction design. Having completed Politics, Philosophy and Economics at the University of Kent in Canterbury and Sciences Po Lille, he specialized in product development and design at KEA Copenhagen School of Design and Technology and followed a course in machine learning at CIID taught by Gene Kogan and Andreas Refsgaard. He is interested in the interstices offered by electronic objects in urban spaces, the unexpected behavior that glitches provoke, and the surprise created by misused hardware systems and hijacked algorithms. With his team, he is thinking about how to repopulate the city via new breeds of urban electronic wildlife.
Greg Nijs is a sociologist working as a researcher at Urban Species (Dept. of Architecture, Université Libre de Bruxelles), an interdisciplinary research group focusing on citizen participation in the city of Brussels. He is also curator and co-director at c-o-m-p-o-s-i-t-e, a Brussels-based non-profit art space. By staging exhibitions Greg tackles questions of knowledge production in its widest sense, in the field of art and society at large. His particular interests revolve around issues of human and other-than-human relations, im/materiality, affect and cognition, identity politics, the question of nature and technology, and a/biotic multispecies entanglements. In his approach, he draws on a range of social-scientific and philosophical sources such as science and technology studies, design studies, cultural studies, cognitive sciences, HCI, pragmatism, and the like. Currently, he is conducting research on the development of smart tools for civic engagement with a participatory design approach.
Anonymous Conjecture
Fangqing He
New York University Shanghai
https://quinnhe.github.io/anonymousConjecture.html
Abstract
While most celebrities try hard to make their faces recognizable, some decide to hide their identities. Satoshi Nakamoto (the creator of Bitcoin), The Residents (a surrealist band), Banksy (the world’s best-known street artist), and The Stig (an expert-level test driver) are all mysterious anonymous figures about whom hundreds of people make conjectures. However, none of them are assumed to be female. The project aims to force people to make new conjectures: the anonymous can be women.
Invisible Discrimination Against Women
As feminist campaigns sweep the world, more and more people are becoming aware of issues such as gender equality and sexual harassment. However, not all discrimination is physical and visible. It is easy to overlook mental discrimination against women: something invisible that even some women fail to recognize. Through an interactive experience, this art project intends to make the audience realize the invisible.
The project highlights the anonymous. People love to make guesses about mysterious celebrities who intentionally hide their identities; if asked about these figures’ identities, most people would assume they are male. The creator of Bitcoin, Satoshi Nakamoto, is considered to be a 37-year-old man living in Japan. [1] Banksy, one of the world’s top street artists, is regarded as a 28-year-old white man. [2] Similarly, people assume The Residents (a famous band) and The Stig (the test driver for Top Gear) are all male.
However, can’t women be coding and business geniuses? Can’t women do street graffiti and express to the world their anger against wars and their desire for peace? Can’t women play music or be an F1 driver? The answer is, no doubt, yes.
What the project does is present the image of each celebrity that people normally hold and then break that image by gradually revealing a female figure. The audience move their hands slowly, sweeping the sand off the familiar male faces, which gradually give way to the female figures beneath.
Conjecture, Not Conjecture
The project does not intend to offer a definitive answer about the identities of these anonymous people. Rather, it suggests that their identities could be women: women can do things that we assume they cannot.
Fig 1. Anonymous Conjecture, 2018, Fangqing He
Though the project is still in progress, the female figure already tells many stories: a pregnant woman, a woman who smokes on the street, a woman who picks up kids from school. They can be great role models in any industry; they can be Banksy; they can be Satoshi [fig. 1].
References
1. Ian Steadman, “The Mysterious Satoshi Nakamoto,” New Statesman 145, no. 5313 (May 2016): 17.
2. “Banksy Identified? Geographic Profiling Pinpoints Identity of Elusive Artist,” Philadelphia Examiner (PA), 5 Mar. 2016.
Biography
Fangqing (Quinn) He is an Interactive Media Arts and Computer Science student at New York University. To explore the possibilities of daily life and evoke a daydream-like romance, her works focus on creating an illusion between unreality and reality through interactive installations. Her creative directions include human-computer interaction, creative coding, programming art, and virtual reality.
Adversarial Ornament Attack
Michal Jurgielewicz
Rare Resolutions
rareresolutions@gmail.com
Abstract
This project investigates surface modulation in architectural and landscape design as seen through machine vision. Taking the architectural ornament and the logic behind adversarial attacks on deep neural networks as the core elements of the project, Ornament Attack explores perturbations in image capture and recognition systems and their effects in the built environment. Under the influence of social-media services on tourism and consumerism trends, the current perception of both cities and remote locations is driven by their photogenic attributes, the computational power of photo-editing software, and networks of recommendation and marketing algorithms. Such a monopoly works against the beauty of diversity in representation. Therefore the creation of a constantly evolving physical and digital ornament that disrupts machine vision, parallel to the advancement of machine learning and deep neural networks, can not only shift our perception of space but also add new categories and behaviors to it, along with a new mythology in which machines believe.
Machine eyes
“For art to face the machines, it needs to leave
the church of humans and become fully
processual and transmittable.” [1]
Nowadays, we live in a global, highly
connected and automated world. Every day we
take an active part in an exponential flow of
media, products, ideologies, money and
technologies. That movement, equipped with
tools and platforms aided by neural networks
and machine learning, entered our everyday life
and reshaped not only our built environment, but
the way we experience it. Every time we look at our smartphone or browser, our world gets automatically auto-corrected.
We wander streets, visiting places that somehow appear on top of our search results in Google Maps or TripAdvisor. We communicate with hashtags on Instagram, and through our satellite eye we travel to places, events, and other people’s life moments while going to work every morning. We purchase products shipped to us through a network of ports and logistics centers in remote locations, operated by the same algorithms that stream our media. “If you liked that, then you will love this,” our feed constantly tells us. All this is pinned to the precise location of our own behavioral map.
“Today’s culture as global culture is very much
the processes of de- and reterritorialization. It
should be remarked, that “territory” in the
ethnological sense, is understood as the
environment of a group that cannot itself be
objectively located, but is constituted by the
patterns of interaction through which the group
secures a certain stability and location.” [2]
However, in this world of constant optimization and technological advancement we do not occupy the central place. Companies like “Google, Facebook or Amazon don’t have users or customers [as we would like to think about ourselves]. Instead they have participants under the machine surveillance.” [3] The same companies that provide us with platforms and tools for our everyday life now design cities and “countryside” for the machines that operate next to us. Always-watching autonomous cars and drones delivering our mail are only a prologue to true smart cities, with homes controlled by always-listening, Amazon Echo-like assistants, and before we ask ourselves about their architecture, we should understand the different relationships that exist in our environment.
“Human to human, human to machine and
machine to machine - what is the real nature of
Proceedings of Art Machines: International Symposium on Computational Media Art 2019
Adversarial Ornament Attack. Michal Jurgielewicz
these?” [4] How do machines see and how do
they locate themselves in this complex network
of assumptions about them? Finally, if
architecture is not only a building, but also an
infrastructure, what are the spatial implications
of that? “What happens when the information
necessary to comprehend and operate the
environment is not immanent to that
environment, but has become decoupled from
it? When signs, directions, notifications, alerts
and all the other instructions necessary to the
fullest use of the city appear only in augmentive
overlay and, as will inevitably be the case, that
overlay is made available to some but not to
others?” [5] On the other hand, what are machines capable of seeing?
In planetary-scale operations, the Earth is
being constantly rendered, unfolding new
terrains, structures and behaviors unknown to
our sensorium, yet intertwined with the
landscape we occupy. In these conditions,
Adversarial Ornament Attack becomes the
semi-geological force shaping the environment
with traces of technological progress new to
human culture and machines. This project is a
speculative-fiction approach to exploring the relationship between privacy, space and data. It is a story about enclaves where you cannot take a photo, because the façade patterns and areas are, by their architectural design, invisible to autonomous-car traffic. However, it is also a story about the landscapes emerging in these conditions, perceived only by machines and created by collisions in image classifiers.
Adversarial Ornament Attack is filled with nostalgia for the unknown: cities, places and landscapes that exist in our imagination until visited. It mixes craftsmanship with fabrication and neural networks to construct, from a machine-eye perspective, new environments for humans and machines to explore.
Adversarial Attack
An adversarial attack on a deep neural network (DNN) is a subtle modification of an image, invisible to the human eye, which causes the DNN to misclassify the image. Recent findings show that these networks are highly vulnerable to adversarial attacks, even when the modified, printed images are captured by a regular smartphone camera and tested. 3D objects with a slight change of texture are misinterpreted as well: a turtle becomes a rifle, a baseball becomes an espresso cup.
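The mechanics behind such perturbations can be illustrated with the fast gradient sign method (FGSM) from the Goodfellow et al. paper shown in Fig. 1. The sketch below is not part of the artwork: it applies the method to a toy linear classifier with hypothetical weights, so the gradient is exact and the example stays self-contained.

```python
import numpy as np

# Toy linear classifier standing in for a DNN: class 1 if w.x + b > 0.
# Weights and inputs are arbitrary illustration values.
w = np.array([0.5, -0.3, 0.8, 0.2])
b = -0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, epsilon):
    """Fast gradient sign method: move each input dimension by epsilon
    in the direction that lowers the current class's score. For a linear
    model the gradient of the score w.r.t. the input is simply w."""
    grad = w
    direction = -1.0 if predict(x) == 1 else 1.0
    return x + direction * epsilon * np.sign(grad)

x = np.array([0.9, 0.1, 0.7, 0.5])    # classified as 1
x_adv = fgsm_perturb(x, epsilon=0.6)  # bounded change per dimension
# The prediction flips even though no dimension moved by more than epsilon.
```

Against a real DNN the same recipe applies, except the gradient is obtained by backpropagation; with a small enough epsilon the change is imperceptible to the human eye, as in the turtle/rifle example above.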
Fig 1. Top: example attack, from Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy, “Explaining and Harnessing Adversarial Examples,” 2015, image, Cornell University Library. Bottom: Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok, “Fooling Image Recognition with Adversarial Examples,” MIT CSAIL YouTube channel, 2017, video, MIT.
References
1. Mohammad Salemy, Art after the Machines,
Supercommunity: Diabolical Togetherness
Beyond Contemporary Art (London: Verso,
2017), 345.
2. Ryszard Wolny, “Gilles Deleuze and Félix Guattari’s Concepts of Deterritorialisation and Reterritorialisation as Globalisation of Culture,” 37.
3. Bruce Sterling, The Epic Struggle of the Internet of Things (Moscow: Strelka Press, 2014), 8.
4. Theodore Spyropoulos, Future Culture (London: AA Lecture Series, 2018).
5. Adam Greenfield, Radical Technologies: The
Design of Everyday Life (London: Verso, 2018),
176.
Biography
Michal Jurgielewicz is an architect, founder of
Rare Resolutions, an investigative architecture
agency currently based in Bangkok, exploring possible presents through constantly changing cultural, technological and geographical landscapes. He has taken part in international festivals, exhibitions, workshops and seminars in Poland, Italy and the Netherlands.
The Time Machine:
a Multiscreen Generative Video Artwork
Daniel Buzzo
University of the West of England, Bristol, UK
daniel.buzzo@uwe.ac.uk
Abstract
‘The Time Machine’ is a multi-screen, high-performance, generative video art installation based around multiple low-cost computer platforms. Using algorithmic selection of palindromic loops of time-lapse video, the work contrasts the external, machine perception of time with our internal, phenomenological experience of it. The video feeds, recorded from around the world, tick and tock backward and forward, creating a polyrhythmic, twelve-screen time-piece. The images loop back and forth on each screen of the installation, creating a large polyrhythmic clock of high-definition, full-color motion, each screen detailing a passage of time from around the world: captured, frozen, forward and reverse. The time-lapse loops slowly switch, selected from over a thousand separate pieces by generative algorithms on each host computer, creating a Time Machine that reflects the world, gently rocking back and forth with a myriad of sub-cadences and confronting the viewer with the unanswerable challenge of comprehending time.
Introduction
The work uses looping time-lapse video shot in locations around the world to engage the viewer in a discussion of the experience, rhythm, repetition and flow of time. Running across multiple monitor screens, the installation senses the audience and in response creates palindromic video loops from high-resolution time-lapse video. The video feeds, recorded from around the world, tick and tock backward and forward, creating a polyrhythmic, multi-screen time-piece, a video clock locked in receptive, slowly evolving loops: a Time Machine reflecting the world. The back-and-forth looping of the video feeds engages the viewer with both the reassurance and the discomfort of seeing the world as “clock-time,” the mechanistic vision that time is something created and measured, governed and ruled externally to ourselves and external to our experience.
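The palindromic looping and generative clip selection described above can be sketched as follows. This is an illustrative reconstruction, not the artist’s actual code, and the function names are hypothetical.

```python
import random

def palindromic_loop(frames):
    # Append the reversed frames, excluding both endpoints so the
    # turn-around frames are not shown twice; repeating the result
    # plays seamlessly back and forth.
    return frames + frames[-2:0:-1]

def next_clip(clips, rng=random):
    # Generative selection: each host computer independently picks
    # which time-lapse clip to loop next.
    return rng.choice(clips)

loop = palindromic_loop([1, 2, 3, 4])
# loop == [1, 2, 3, 4, 3, 2]; repeated playback yields 1 2 3 4 3 2 1 2 3 4 ...
```

Because the reversed copy omits the first and last frames, concatenating the loop with itself never shows the same frame twice in a row, which is what makes the back-and-forth motion appear continuous.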
Fig 1. THE TIME MACHINE 2017, Daniel Buzzo multi-screen
generative video installation, Copyright the author.
The piece is a companion to the 2016 dual-screen installation “What Do We Know Of Time When All We Can Know For Real Is Now?” [1] [2], exhibited at events such as “Digital Futures” at the Victoria & Albert Museum, Computer Art Congress 5 in Paris, and ACM MM at OBA in Amsterdam.
The work “The Time Machine” contrasts the
external, machine perception of time with our
internal, phenomenological experience of it.
The notion of ‘clock time’ is a powerful and
extremely widely adopted metaphor for what
can be argued as the most fundamental element
of experience. [3] Time links all things we see
and perceive, from our earliest awareness of
our own physical growth and mortality to more
subtle realizations of the narrative procession
of events and even the concept of causality. [4]
The complexity of disassembling what this experience means has been wrestled with for millennia, as Augustine of Hippo asked around 400 AD:

"What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not." (St Augustine, Confessions, Book XI)
The model of time we have in daily life treats
the ideal of “Now” as a special moment,
though this may be particular to humans. It
gives the notion of the “unfolding” of the
universe and shows time as a continuum. [5]
Human convention may dictate we travel along
this because, as Augustine of Hippo postulated around 400 AD, humans have fallible perception and
cannot see the world as it truly is. Augustine
argues, how can that which is not real (the
Future) become real (the present) and then
become unreal again (the past)? The evidence
and the balance of the philosophical argument
is for procession and flow: what Heraclitus, and subsequently Nietzsche, described as "all is chaos and becoming." However, clock time, an
external mechanical, industrial notion of time,
has become dominant since the turn of the last
century. [6] The patterns and rhythms seen are
considered cyclic, oscillating and reciprocating
like the cogs and gears in a clock. Even the movements of the stars, moon and planets around us are considered as an orrery, a child’s instructional toy that describes the universe.
This work presents this mechanical clock fiction directly to the viewer, folding half a dozen different types of time together in multi-screen video form: time-lapse video from different time zones shifted and collated together, sunshine alongside moonlight, dawn next to the falling of dusk. The video loops back and forth on each screen of the installation, creating a large polyrhythmic clock of high-definition, full-color motion, each screen detailing a passage of time from around the world: captured, frozen, forward and reverse. The time-lapse loops slowly switch, selected from over a thousand separate pieces by generative algorithms on each host computer, creating a slowly evolving and changing time machine, gently rocking back and forth with a myriad of sub-cadences, confronting the viewer with the unanswerable challenge of comprehending time.
References
1. Daniel Buzzo, “What Do We Know Of Time
When All We Can Know For Real Is Now,” in
Proceedings of the 5th Computer Art Congress
(Paris: Europia Press, 2016).
2. http://buzzo.com/what-do-we-know-of-time-when-all-we-can-know-for-real-is-now/
3. N.D. Munn, “The Cultural Anthropology of
Time: A Critical Essay,” Annual Review of
Anthropology, 21, no. 1 (1992): 93–123.
4. T. Garcia, and K. Pender, “Another Order Of
Time: Towards a Variable Intensity of the
Now,” Parrhesia: A Journal Of Critical
Philosophy, 19 (2014): 1–13.
5. E. Husserl, On the Phenomenology of the
Consciousness of Internal Time (1893-1917), in
Edmund Husserl Collected Works (1991).
6. J. Martineau, Time, Capitalism and Alienation (Brill, 2015).
Biography
Dr. Daniel Buzzo is a media artist, interaction designer, researcher and senior lecturer in Digital Media and Creative Technologies, working across the UK, the Netherlands and Hong Kong. He is a founding member of the Creative Technologies Lab at the University of the West of England and program leader of the Master’s program in Creative Technology.
His experimental interactive media art work
is intimately bound in time, temporality and
lens-based visualization. He constructs and
uses experimental cameras and data
visualization systems for urban imaging, street
photography and visualization.
He publishes and presents widely and his
work has been shown at international
exhibitions, galleries and conferences including
Digital Futures at Victoria and Albert Museum,
London; Computer Art Congress, Paris;
International Symposium of Electronic Art
(ISEA) Colombia; DataAesthetics at ACM
MultiMedia, Amsterdam; GENART XX, Italy;
and Carbon Silicon at Oriel Sycharth Gallery.
Part IV
Review Board
Review Board of Art Machines: International Symposium on Computational Media Art
Tanya Toft Ag, City University of Hong Kong, Urban Media Art Academy
Gustavo Armagno, Universidad de la República
Javier Baliosian, Universidad de la República
Maurice Benayoun, City University of Hong Kong
Alvaro Cassinelli, University of Tokyo
Lin Chang, National Tsing-Hua University
Damien Charrieras, City University of Hong Kong
Budhaditya Chattopadhyay, American University of Beirut
John Drever, Goldsmiths, University of London
Hongbo Fu, City University of Hong Kong
Daniel Howe, City University of Hong Kong
Tobias Klein, School of Creative Media
Dietmar Koering, Arphenotype
Gene Kogan
Harald Kraemer, City University of Hong Kong
Kin Chung Kwan, City University of Hong Kong
Linda Lai, City University of Hong Kong
Tomas Laurenzo, City University of Hong Kong
Guillermo Moncecchi, Universidad de la República
Lisa So Young Park, City University of Hong Kong
Jane Prophet, University of Michigan
Anna Ridler
Alejandro Rodriguez, dogrush
Hector Rodriguez, City University of Hong Kong
Pilar Rosado Rodrigo, Universitat de Barcelona
Margaret Schedel, Stony Brook University
Jeffrey Shaw, City University of Hong Kong
Malina Siu, City University of Hong Kong
Ayoung Suh, City University of Hong Kong
Jeff Thompson, Stevens Institute of Technology
Ken Ueno, City University of Hong Kong
Guan Wang, City University of Hong Kong
Pengfei Xu, Shenzhen University
Dongming Yan, NLPR-CASIA
Yang Yeung, Chinese University of Hong Kong
Kaho Yu, City University of Hong Kong
Bo Zheng, City University of Hong Kong
Proceedings of Art Machines: International Symposium on Computational Media Art 2019