@transmolovesyou


Custom Twitter bot. 2017.

Commissioned by Transmission, Glasgow.

Follow @transmolovesyou on Twitter: https://twitter.com/Transmolovesyou

I have strong feelings about artist-led archives. So, when invited to make an intervention in the Transmission archive for this project, the most obvious solution was to outsource the intervention to an algorithm.

@Transmolovesyou is a Twitter bot that constructs fictitious artistic projects using real exhibition details skimmed from the Transmission archive—artists’ names, titles, etc. It’s an automated content engine, producing hypothetical art productions. @Transmolovesyou generates a new tweet every 24 hours, and since its dataset contains enough data points to generate over 8 million unique combinations, it should continue producing fictional Transmission projects for several thousand years (or for as long as Twitter continues to exist). It was created using SSBOT, a simple tool developed by Zach Whalen and built on a Google spreadsheet.
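The longevity claim follows from simple arithmetic: the number of unique tweets is the product of the word-list sizes, and at one tweet per day, 8 million combinations last more than twenty thousand years. A rough sketch (the column sizes here are hypothetical, chosen only so the product exceeds 8 million; the real spreadsheet's counts aren't published):

```python
# Hypothetical column sizes, not the actual Transmission dataset.
artists, verbs, titles, media, venues = 60, 25, 40, 20, 7

combinations = artists * verbs * titles * media * venues
years = combinations / 365  # one tweet every 24 hours

print(combinations)   # 8,400,000 unique tweets
print(round(years))   # roughly 23,000 years of daily posts
```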

Unlike other ways of creating a Twitter bot, this method requires no knowledge of code, and unlike many bots, @Transmolovesyou doesn’t engage other Twitter users in conversation, interact with live data sources or pretend to be a real person. Unlike an AI or neural network, it doesn’t learn—it simply follows a set of instructions (the algorithm). The bot doesn’t understand natural language or syntax any more than it understands intention, desire, metaphor or context. Like content-spamming websites, which list multiple unrelated phrases in an attempt to climb to the top of search engine indexes, it generates a kind of contentless content: it’s nothing but noise; there is no signal.

The database is constructed from hundreds of individual data points (nouns, verbs, adjectives, etc.), skimmed from Transmission’s exhibition and event listings from 1983 to the present day. SSBOT then provides a set of instructions for how these words should be arranged. The algorithm doesn’t act alone: a large degree of curation went into refining the data points so that the sentences generated by the algorithm sound more like natural language. The dataset isn’t a comprehensive or complete database of Transmission’s history. Omissions and elisions were made to give @transmolovesyou the feeling of being, maybe, based on something real that once happened.
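The core mechanic—pick one word from each column of a spreadsheet and slot it into a fixed sentence template—can be sketched in a few lines. The word lists and template below are invented for illustration and are not taken from the actual Transmission dataset or from SSBOT itself:

```python
import random

# Hypothetical columns, in the style of the archive-derived data points.
ARTISTS = ["A. Example", "B. Placeholder", "C. Stand-in"]
VERBS = ["presents", "stages", "unveils"]
MEDIA = ["a video installation", "a slide lecture", "a sound work"]
TITLES = ["'Untitled'", "'Signal/Noise'", "'Archive Fever'"]

def generate_tweet():
    """Fill a fixed template with one random choice from each column."""
    return (f"{random.choice(ARTISTS)} {random.choice(VERBS)} "
            f"{random.choice(MEDIA)}, {random.choice(TITLES)}, "
            f"at Transmission, Glasgow.")

print(generate_tweet())
```

Every tweet is grammatical because the template fixes the syntax in advance; the curation described above happens in the word lists, not in the code.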