Lacework (2020)
Commissioned for The Photographers' Gallery, London, Lacework looks at the conditions of production of artificial neural networks: the materials of daily human life that go into their training and construction, as well as the hidden labor of tagging, organizing, and compressing these materials.
Lacework takes as its source MIT's Moments in Time dataset. Developed in 2018, Moments in Time was built to help automated systems recognise and understand different actions in video. It contains one million three-second videos, scraped (generally without consent) from websites like YouTube and Tumblr, each tagged with a single verb such as asking, resting, snowing or praying.
Each of the 339 verb tags contains thousands of videos ranging from the very personal to the widely recognisable. For instance, the 'Drumming' tag includes a high school marching band, an excerpt of Animal from The Muppets, a performer in a subway station and a YouTube tutorial, among others. 'Flying' includes a view from the window of an airplane, a bee circling a flower, a satellite rotating above the earth, a flock of flamingos and a skydiver yelling something we cannot hear.
Using algorithms that stretch time and add detail to images, Lacework presents a hallucinatory slow-motion river of these moments, as if preserved in amber, flowing from one to another in a cascade of gradual, unfolding details of the everyday actions that form the collection.
Lacework can be experienced alongside its companion essay, On Lacework: watching an entire machine-learning dataset.