Critical Bots
15 June 2017 19:30 - 16 June 2017 17:00
On 15 and 16 June 2017, The Bot Club (humans welcome): Critical Bots was held at Het Nieuwe Instituut. During the Thursday Night Live! evening, Julian Oliver and Anne Helmond gave lectures, and the next day a hackathon was organised. Critical Bots was the second edition of The Bot Club (humans welcome), a recurring program on algorithmic culture by Het Nieuwe Instituut.
Algorithmic culture is a label for the complex, sometimes disturbing, sometimes funny state of cultural formation that humans have entered: we produce algorithms that operate on the web and in social media on such a massive scale, and at such an intrinsic level of society, that they in turn play a key role in producing culture. The term bots is defined in different ways in different contexts. In this program it is used quite generously: on 11 May we organised, together with FIBER Festival, a masterclass on bot making with artist Darius Kazemi. His preferred definition was: 'algorithms with a face'. Anne Helmond, first speaker of the Bot Club on Critical Bots, added the element of unsupervised operation. Bots, then, would be algorithms with a face that operate autonomously.
Automated opinion forming (introduction)
The program of Critical Bots set out to investigate whether bots can enter the domain of (political) opinion forming in a critical manner. It is clear that this domain has become a brutal algorithmic battlefield. Three short examples:
On a very common, everyday level, the content selectors that shape the timelines of Facebook users have a significant effect on the sort of messages, ads and stories those users encounter. These systems favor content that users tend to click on, because that allows for more precise and successful ad placement, which generates Facebook's revenue. In this way the filter bubbles humans already tend to surround themselves with in their discursive existence are reinforced.
Russia appears to be maintaining an army of what has been nicknamed Putinbots. Some of them are human disinformation trolls but most are fully automated hate-bots that selectively and massively spread false stories with the intent to undermine public discourse on various online platforms during election campaigns in the US and in Europe.
And then we know of political so-called megaphoning systems like AgendaofEvil: basically a website, Twitter account and retweet network that quickly and effectively spreads anti-Islam spam. It is safe to say that the automated opinion sphere is dominated by bots from the far- and alt-right ends of the political spectrum. Rather than applying the same mechanisms to spread another type of political message (thereby merely entering a competition for attention), in Critical Bots we wanted to ask: can bots or other algorithmic agents also play a role in critical opinion forming? In other words: can political or cultural criticism also take an algorithmic form? Or does it perhaps have such a form already, somewhere?
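The engagement-driven selection in the first example can be sketched in a few lines. The post data and prediction scores below are invented for illustration; real feed-ranking systems learn such scores from behavioural data at enormous scale.

```python
# Toy sketch of engagement-ranked feed selection: posts a user is
# predicted to click are shown first, which is how click-optimised
# ranking can reinforce a filter bubble. All data here is invented.

posts = [
    {"id": "news_far_away",  "predicted_click": 0.02},
    {"id": "friend_photo",   "predicted_click": 0.65},
    {"id": "like_minded_op", "predicted_click": 0.40},
]

def rank_feed(posts):
    """Order the timeline by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: p["predicted_click"], reverse=True)

feed = rank_feed(posts)
# feed[0] is the post the user is most likely to click on.
```

The familiar, agreeable content wins every time; the unfamiliar story sinks to the bottom, which is the bubble effect in miniature.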
The talks
Critical Bots consisted of two parts: a Thursday Night Live program with speakers and a more practice-oriented program the next day.
Critical Making
Florian Cramer of WdKA opened the program with a brief introduction to the position paper on Critical Making (summary to be found here), which in certain important respects was inspired by the critical engineering practice of Julian Oliver, who spoke later in the evening. Het Nieuwe Instituut is part of a research consortium that aims to reiterate and elaborate on the practice of Critical Making.
Platform-specific bots and their critical potential
Researcher in digital culture Dr. Anne Helmond of the University of Amsterdam then outlined a brief history of bots as the internet's 'first indigenous species'. She described a range of current systems and platforms, like Twitter and Wikipedia, as ecologies formed by the interplay between humans and bots. She then explored various typologies of critical bots. Transparency bots are automated agents that use social media to draw attention to the behavior of particular actors: they report the behavior of targeted institutions to provide enhanced surveillance by the public. Anti-harassment bots (or blockbots) support the curation of a shared blocklist of accounts; subscribers to a blockbot will not receive any notifications or messages from accounts on the blocklist. Popperbots and Bridgerbots operate on the analysis of online discursive networks on Twitter: Popperbots infiltrate strongly perspective-confirming subnetworks (echo chambers) and inject alternative views, while Bridgerbots create links between two or more segregated networks of Twitter users. Helmond concluded with remarks on the crucial roles that bots play within online platforms: bots do not merely inhabit a platform but actively co-construct the environment and give meaning to it.
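The shared-blocklist mechanism behind the blockbots Helmond describes can be sketched very simply. The names and data shapes below are illustrative, not any real platform API; an actual blockbot applies the list through the platform's blocking endpoints on behalf of each subscriber.

```python
# Minimal sketch of a shared-blocklist ("blockbot") filter.
# All account names and message structures are invented for
# illustration; a real blockbot would call a platform API.

shared_blocklist = {"harasser_1", "spam_account", "troll_42"}

def filter_timeline(messages, blocklist):
    """Drop any message whose author is on the shared blocklist."""
    return [m for m in messages if m["author"] not in blocklist]

timeline = [
    {"author": "friendly_user", "text": "Hello!"},
    {"author": "troll_42", "text": "abuse..."},
]

visible = filter_timeline(timeline, shared_blocklist)
# Only the message from friendly_user remains visible.
```

The point of the shared list is that curation labour is pooled: one subscriber's report protects every other subscriber.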
Critical Automatons by Julian Oliver
Media artist Julian Oliver prefers to understand his works as automatons. The term bots suggests agents that exist online, often within online platforms. Oliver's works are digital and networked, but operate outside platforms and sometimes even alongside the internet. They offer critical interventions that question the assumptions of users and programmers alike about the trustworthy functioning of digital infrastructure. Oliver is co-author of the Critical Engineering Manifesto, which offers a set of guiding principles for his practice. His works are often embodied as inconspicuous pieces of hardware, like an electrical adaptor or an HP office printer. Next to their regularly functioning parts, they contain technologies that pry open the black boxes of seamlessly functioning ICT infrastructure. Newstweek ('fixing the facts'), operating from what looks like an electrical adapter plug, locally breaks into information streams and allows users to edit the online stories that news consumers connected to a public wifi hotspot see appearing on their screens.
Stealth Cell Tower (disguised as an office printer) locally captures data traffic to cell phones, and sends text messages that appear as though they are from someone who knows the recipient. All communication between the Stealth Cell Tower and the recipient is immediately printed out. Every now and again the printer also randomly calls phones in the environment; on answering, Stevie Wonder's 1984 hit _I Just Called To Say I Love You_ is heard.
Julian Oliver's automatons do not intervene at the level of the digital platform (within the so-called application layer of the internet protocol stack) but at the much more basic level of the link layer or sometimes the internet layer. His interventions reveal the unquestioning dependency of web users on leaky, often manipulative, and faulty technical infrastructure.
A Hackathon with case studies
Next to providing evocative, insightful and disturbing perspectives on current algorithmic culture, these talks also functioned as an introduction to the more practice-based hackathon on Critical Bots the day after. There, three case studies framed more articulated technical questions about actually making new critical bots.
Cristina Cochior
Bot custodian and media researcher Cristina Cochior presented a study of the bot culture of Wikipedia, focusing in particular on the work of _Cluebot NG_. Bots on Wikipedia are intentionally made invisible, thereby playing into the myth of Wikipedia as primarily a project of combined human intelligence. Cluebot NG is the most active and important anti-vandalism bot on Wikipedia. It restores ruined pages and filters offensive language, and it has higher rights than many humans who work on Wikipedia. Cluebot NG is what is known as a janitor bot, performing the crucial (but for humans very boring and far too massive) labour of maintenance. It does not actually intervene in discourse in a critical manner (unless a critical attitude towards vandalism would count), but it facilitates a certain quality of discourse by weeding out noise and attempts at destruction. Its vandalism detection mechanism uses machine learning algorithms, trained on a dataset put together by human vandalism fighters.
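The supervised approach behind such a vandalism filter can be sketched as follows. The training examples and the word-count scoring rule are invented for illustration only; Cluebot NG itself uses a neural network trained on a far larger set of human-reviewed edits.

```python
# Toy sketch of supervised vandalism detection: learn which words
# are typical of vandalism from a small human-labelled training set,
# then score new edits. Data and scoring rule are illustrative.

from collections import Counter

training_set = [
    ("this page is stupid lol", "vandalism"),
    ("u suck lol", "vandalism"),
    ("added citation for the 1967 study", "legitimate"),
    ("fixed typo in the introduction", "legitimate"),
]

# Count how often each word appears under each label.
counts = {"vandalism": Counter(), "legitimate": Counter()}
for text, label in training_set:
    counts[label].update(text.split())

def classify(edit_text):
    """Label an edit by which class its words appear in more often."""
    words = edit_text.split()
    score = {label: sum(counts[label][w] for w in words)
             for label in counts}
    return max(score, key=score.get)
```

The crucial point from the talk survives even in this toy version: the bot's judgement is only as good as the dataset the human vandalism fighters assemble.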
Sarah Eskens
Legal scholar Sarah Eskens (UvA) presented her research into the functioning of recommender systems for news feeds, in the context of the larger question of how to stimulate democratic debate. She raised the hard question of how to capture diversity in an automated way. First of all, there is the issue of what to (try to) optimize for: diversity in the sources of news stories, diversity in the types of (political) content, or exposure diversity, i.e. making sure users actually get exposed to diverse content and/or sources. Underpinning this is the more fundamental question of which (often implicit) model of democracy is being worked with. A pluralist idea? An agonistic model? A deliberative, consensus-driven process?
Making things
Julian Oliver then went more deeply into the hardware and code of possible critical interventionist works. Three ideas were developed, for two of which proofs of (aspects of) the concept were delivered. One system, inspired by Oliver's work, would spoof weather patterns and locally feed them to the well-known weather site Buienradar. Fake weather would again critically question our dependency on digital infrastructure, but could also spark debate on climate change. A second concept targeted flexible pricing systems, which use data on a user's location, cookies, browser history and the like to calculate, per user, the highest price he or she is willing to pay for a certain commodity. The system would automatically generate a (fake) consumer identity able to get the lowest possible price for online purchases. A third project tried to tackle the source diversity question in online debates.
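The second concept hinges on presenting web shops with a profile they cannot price against. A minimal sketch of that idea, under the assumption that stripping tracking state is enough to look like a "new" customer (a working version would also need fresh network identities and more):

```python
# Illustrative sketch of the fake-consumer-identity concept from the
# hackathon: build request headers that carry no tracking state, so a
# dynamic-pricing system cannot profile the buyer. The header choices
# are hypothetical design decisions, not the workshop's actual code.

import random

GENERIC_USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def fresh_identity():
    """Build request headers without cookies or locale signals."""
    return {
        # A generic browser signature instead of a fingerprintable one.
        "User-Agent": random.choice(GENERIC_USER_AGENTS),
        # No Cookie header: no purchase history to price against.
        # A bare Accept-Language hides locale-based pricing signals.
        "Accept-Language": "en",
    }

headers = fresh_identity()
```

Each purchase would be made with a newly generated identity, so the shop sees only first-time visitors.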
A more elaborate report on the workshop, with links to resources, will be made available on demand soon. The combination of theoretical and practical discourse that we explored in this event closely followed the spirit of Critical Making and proved highly constructive. We will therefore continue in this set-up. The next Bot Club (working title Decolonizing Bots) will be held on 26 October and will again be followed by a one-day workshop.