Machine vision technologies, such as object recognition, facial recognition and emotion detection, are increasingly used to turn images into information, to filter it, and to make predictions and inferences. In recent years these technologies have made rapid advances in accuracy, driven by the revival of neural networks that learn from observed data, access to massive amounts of data for training those networks, and increased processing power.

This naturally causes excitement among the innovators implementing these technologies. In the world of automated surveillance, new machine vision techniques are being developed to spot suspicious behavior without human supervision. As in all pattern recognition applications, the desire is to translate images into behavioral data. Multibillion-dollar investments in object detection technologies assume that it is easy for computers to extract meaning from images and render our bodies into biometric code, yet trial-and-error approaches have already revealed that machine learning is far from objective and reinforces existing biases. Human biases are embedded in machine learning systems throughout the process of assembling a dataset, for example in categorising, labelling and cleaning training data.
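To make this concrete, here is a minimal sketch, in Python, of how an annotation schema for a surveillance dataset might look. It is an illustration, not any real dataset’s schema: all category and field names are invented. The point is that the taxonomy itself encodes judgements before a single clip is labelled.

```python
from dataclasses import dataclass

# Hypothetical label taxonomy for a surveillance video dataset.
# Deciding which categories exist -- and that everything else is
# "normal" -- is a design decision made long before annotation starts.
LABELS = {
    0: "normal",            # the residual category: everything else
    1: "loitering",         # standing still "too long" -- by whose clock?
    2: "running",           # flight, sport, or catching a bus?
    3: "abandoned_object",
}

@dataclass
class Annotation:
    clip_id: str
    start_frame: int
    end_frame: int
    label: int              # must be a LABELS key; no "it depends" option

# An annotator's cultural judgement, stored as if it were neutral data:
ann = Annotation(clip_id="cam03_0042", start_frame=120, end_frame=310, label=1)
print(LABELS[ann.label])    # -> "loitering"
```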

What is considered suspicious in one cultural context might be normal in another; hence, developers admit, “it’s challenging to match that information to ‘suspicious’ behavior”. Nevertheless, the surveillance industry is developing “smart” cameras that detect abnormal behavior in order to prevent unwanted activities. Moreover, the current COVID-19 pandemic has accelerated the development of computer vision applications that trace new forms of dubious behavior.

The work “Suspicious Behavior” shows the world of hidden human labor that builds the foundation of how ‘intelligent’ computer vision systems interpret our actions. Through a physical home-office set-up and an image labelling tutorial, the user steps into the tedious work of outsourced annotators. In an interactive tutorial for a fictional company, the user is motivated and instructed to take on the task of labelling suspicious behavior. The video clips in the tutorial are taken from various open machine learning datasets for surveillance and action detection. Gradually the tutorial reveals how complex human behavior is reduced to banal categories of anomalous and normal. The guidelines for what counts as suspicious behavior, illustrated in a poster series and drilled in the tutorial exercises, are compiled from lists issued by various authorities. As the user is given limited time to perform the labelling tasks, the artwork provokes reflection on how easily biases and prejudices become embedded in machine vision. “Suspicious Behavior” asks: is training machines to understand human behavior actually as much about programming human behavior? What role does the ‘collective intelligence’ of micro-tasking annotators play in shaping how machines detect behavior? And in what ways are the worldviews of developers embedded in the process of meaning-making as they frame the annotation tasks?
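One way to picture the ‘collective intelligence’ question the work raises: crowd-sourced labels are commonly reconciled by majority vote, so a dissenting reading of a scene leaves no trace in the final training data. A minimal, hypothetical illustration (the votes below are invented):

```python
from collections import Counter

# Three micro-task annotators watch the same clip under time pressure.
# Two read the scene as suspicious; one sees ordinary waiting.
votes = ["suspicious", "suspicious", "normal"]

# Majority vote, a common aggregation strategy: the minority
# interpretation simply disappears from the dataset.
label, count = Counter(votes).most_common(1)[0]
print(label, f"({count}/{len(votes)} agreement)")  # suspicious (2/3 agreement)
```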


KairUs is a collective of two artists, Linda Kronman (Finland) and Andreas Zingerle (Austria). Currently based in Bergen (Norway), they explore topics such as vulnerabilities in IoT devices, the corporatization of city governance in Smart Cities, and citizen-sensing projects in which technology is used to reclaim control of our living environments. Their practice-based research is closely intertwined with their artistic production: adopting methodologies used by anthropologists and sociologists, their artworks are often informed by archival research, participant observation and field research. Besides the artworks, they publish academic research papers and open access publications to contextualize their artworks within wider discourses such as data privacy & security, activism & hacking culture, disruptive art practices, electronic waste and the materiality of the internet.

Linda Kronman is a media artist and designer. She is currently a PhD candidate at the University of Bergen in the Machine Vision project. She holds an MA in New Media from Aalto University, Finland (2010). In her artistic work she explores methods of interactive and transmedial storytelling, data visualization and creative activism. She is part of the artist duo KairUs and has been producing art together with Andreas Zingerle since 2010. Their artistic research topics include surveillance, smart cities, IoT, cybercrime, online fraud, electronic waste and machine vision. Together they have edited the books Behind the Smart World (2016) and Internet of Other People’s Things (2018), both open access publications bringing together critical perspectives on the everyday use of technology, focusing on artistic research and the tacit knowledge produced through cultures of making, hacking and reverse engineering. She has organized several participatory workshops, taught at Woosong University, Daejeon, South Korea (2017–2018), and presented her work at international exhibitions and conferences including the Moscow Young Arts Biennale, SIGGRAPH Asia, the WRO Biennial, ISEA, ELO and Ars Electronica.

Andreas Zingerle is a media artist from Austria. He received his PhD from the University of Art and Design Linz (Austria), researching topics such as internet crime, fraud and scams, vigilante counter-movements and anti-fraud activism. He translates the social engineering strategies that emerge from his research into interactive narratives, artistic installations, data visualizations and creative media competence trainings. In recent years he has worked on several installations exploring the creative misuse of technology and alternative forms of human-computer interaction. Since 2004 he has taken part in international conferences and exhibitions, among others Ars Electronica, ISEA, Aksioma, SIGGRAPH, the Japan Media Arts Festival, FILE and the WRO Biennial.