Thursday Night Live! Decolonising Design II with Ramon Amaro

An essay written by Ramon Amaro for his talk at Het Nieuwe Instituut in the Thursday Night Live! series on Decolonising Design.

By Ramon Amaro, 20 February 2017

According to data critic Rob Kitchin, we have entered a new era of data. He describes this era as a data revolution that promises the possibilities of equality and objectivity. He conceives of it as a revolution in which decisions are made based on data that extend beyond the restrictions of subjective boundaries, as well as the cultural circumstances that produce social inequalities, violences and relationships of power(1). A significant portion of the data revolution, as Kitchin describes it, is concerned with the tensions that emerge when data are used to derive social, political and economic meaning without a more complete consideration of the cultural properties inherent in data, and of the generalisation of groups and individuals. As such, Kitchin cautions against the view that data-driven information has more value than other forms of knowledge.

Studies have shown that commonly used self-learning machines can produce results that reveal racist and other discriminatory practices, even if the algorithm is not designed with this intention (Kitchin, 2014; Zuberi, 2001).

Kitchin borrows from Foucault in considering the data revolution in terms of a quantified method of governance with the capacity to naturalise dynamic and contingent social events. This, or what Foucault describes as a discursive regime, is based on 'a set of interlocking discourses that justifies and sustains new developments and naturalises and reproduces their use'(2). Foucault argues that the regime works by producing certain 'atmospheres' in which social patterns become naturalised and de-contextualised in the process of administration, or what in contemporary data practices would be considered social 'insights'. In other words, as data works through our social, economic and political worlds, a logics of governance emerges that takes for granted the contextual circumstances at work in the creation of dynamic cultural relationships. What is left is data in direct conversation with government decision-making, outside of a more contextual state of affairs. As such, key questions emerge as to the role of citizenship in discursive regimes based on data, not to mention the grounds on which the relationship between the human body and data can be worked through in critical spaces of art, design, politics and aesthetics.

Still, despite its capacity to generate new modes of governance and political understandings, data are employed for a number of beneficial processes, including more efficient administrative operations, management, exploration and insight into social practice. By simplifying problems, data also enable researchers to make great strides in the understanding of social relationships. For instance, through complex mathematics machines learn how to read car number plates and to identify fingerprints, retinas and even faces. Similarly, computers can learn our shopping preferences based on patterns of online (and offline) purchases. They can predict products we might enjoy, which neighbourhoods are suitable for home buyers and which financial transactions are ours or those of identity thieves. We are arguably a world that not only learns better, but is designed to learn for greater social efficiency.
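To make the mechanics of this pattern-learning concrete, the following is a minimal sketch of one common approach: scoring products a shopper has not yet bought by how often they co-occur with products they have. The purchase matrix and product names are hypothetical illustrations, not any vendor's actual system.

```python
# A minimal sketch of the pattern-learning described above: predicting which
# products a shopper might enjoy from past purchase patterns. All data here are
# hypothetical illustrations.
import numpy as np

# Rows are shoppers, columns are products; 1 means the product was bought.
purchases = np.array([
    [1, 1, 0, 0],   # shopper A: coffee, filters
    [1, 1, 1, 0],   # shopper B: coffee, filters, mugs
    [0, 0, 1, 1],   # shopper C: mugs, tea
])
products = ["coffee", "filters", "mugs", "tea"]

# Item-to-item co-occurrence: how often two products appear in the same basket.
co_occurrence = purchases.T @ purchases
np.fill_diagonal(co_occurrence, 0)

def recommend(bought, top_n=2):
    """Score unbought products by their co-occurrence with what was bought."""
    scores = co_occurrence[[products.index(p) for p in bought]].sum(axis=0)
    ranked = [products[i] for i in np.argsort(-scores) if products[i] not in bought]
    return ranked[:top_n]

print(recommend(["coffee"]))  # e.g. ['filters', 'mugs']
```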

Once captured, however, data typically remain passive until either mined towards the aim of a specific objective or put into action to reveal, even unwittingly, specific associations with other discrete data. Mining is also where individuals are located, at the intersection of transparency and concealment. While discoveries, or categorisations in these scenarios, are likely to produce strong patterns and statistical relationships, they are also prone to generalisations or predictions about what new patterns might emerge in the future(3). In terms of generalisation and prediction, data can then form baselines from which a mathematical relationship (including any statistical deviations) is expressed, and where preemption becomes a mechanism by which relationships can be amplified into otherwise undetected associations. In this way, the profiles and models produced both subsume fragments of culture and regulate experiences towards preempted outcomes. If data are the building blocks of social profiles, then the knowledges that emerge accumulate into a set of performance expectations based on the data's assumed social, political or economic value(4).
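The baseline-and-deviation logic at work here can be sketched minimally, with hypothetical data and an arbitrary threshold rather than any particular system's parameters: a profile is reduced to a statistical baseline, and new behaviour is pre-emptively flagged when it deviates from it.

```python
# A minimal sketch of profiling as baseline plus deviation. The figures and the
# threshold are hypothetical illustrations.
import numpy as np

# Historical observations of some measured behaviour (e.g. weekly transactions).
history = np.array([12, 14, 11, 13, 15, 12, 14])

baseline = history.mean()   # the "expected" pattern
deviation = history.std()   # tolerated spread around it

def preempt(new_value, threshold=2.0):
    """Flag a new observation whose distance from the baseline exceeds the threshold."""
    z_score = abs(new_value - baseline) / deviation
    return "flagged" if z_score > threshold else "within profile"

print(preempt(13))   # within profile
print(preempt(40))   # flagged, before any human judgement takes place
```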

Of immediate concern in the study of data politics is how this value is converted into inequitable facts, evidences and information, and how these networks of knowledge articulate themselves into modes of power that directly impact the life capacities of some individuals over others. Data can thus hold serious consequences depending on the circumstances in which they are employed. For instance, speaking on national security, Raley writes that 'data is in this respect performative: the composition of flecks and bits of data into a profile of a terror suspect, the re-grounding of abstract data in the targeting of an actual life, will have the effect of producing that life, that body, as a terror suspect'(5). As Raley highlights, an increased reliance on performance-based metrics means that less emphasis is given to data that may racialise the body and reproduce racisms and social bias(6). In this way, Langley remarks that machine learning research has already become 'mindless comparisons' that do little to identify sources of power(7).

Nonetheless, James et al. argue that despite its fallibility, data-driven learning is both relevant and useful in practical applications. Learning aids both industry and social science research by prompting more informed empirical decision-making(8). Others argue that this current approach has produced a computational agility and sophistication, as demonstrated in smartphone personal assistant technologies like Apple's Siri or Google's Go platforms. Machine learning algorithms must therefore adapt to a range of sensory information, as well as derive 'smart' decisions based on variable inputs such as geo-positioning data, typing pattern recognition and other more passive aggregations of user data.

In other words, machines may not yet comprehend the dynamics of racism, but they can be racist. For instance, Latanya Sweeney (2013) indicates that autonomous advertising algorithms like Google AdSense can produce online ads that suggest people with so-called 'Black-sounding names' have criminal backgrounds, even if they do not.

This can have far-reaching consequences for relationships that depend on the profiles of groups and individuals to either grant or restrict access to certain social and political spaces. Here I point to police, security forces and judicial bodies that are showing an increased interest in Facebook status updates and other unstructured social data as evidence of suspected criminal activity, including judgement of innocence or guilt. Employers are also turning to data and social profiles to assess the employability of candidates. This is done with little consideration for the accuracy or understanding of the data considered or the algorithms that process them. A Google search of my own name reveals a basketball player, a radio personality and others, yet it is left to the searcher to determine which is actually me. The algorithm, however, maintains a position of objectivity, taking little if any accountability for the consequences of the results(9). Questions must be asked as to how accountability is allocated should the results lead to disadvantages imposed upon the user, and under what conditions the user is free to navigate through digital experiences without leaving behind traces of their behaviour.

This matters because data merge with the lived experience of technological citizenship. In this way, data provide more than just key inputs into the production of social knowledges and performance. They challenge how algorithmic processes resist the capacity for differences in self-identity that emerge when groups or individuals gather in specific locations of interest(10).

Towards a Decolonised Design

Concerning what I have just described, and in keeping with the theme of this event, decolonising design, I would like to shift our focus to the specific relationship between data, design and the distribution of power. The theme itself implies an already implicit (or explicit) awareness of potential discriminations in data-driven design practice. And although the field is very broad, most digital ecologies rely on either the performance of mathematics and data in the form of software, or the transmission of data and its output as information. If we are to take data at face value, then, as Lisa Gitelman states (and as I have described above), there is no such thing as 'raw data'(11). And if this is so, then data inherently hold the capacity for colonisation, and in many instances are already colonised.

However, before we go further and discuss ways data colonisation can be resisted, it is beneficial to describe what 'colonisation' means in terms of the quantified and digital self. For this I turn to Frantz Fanon's decolonial project. In The Wretched of the Earth, Fanon describes colonisation as a methodology for social control, one that holds the life of a people in its grip by emptying it of all forms of content. He writes that colonisation is 'a kind of perverted logic, it turns to the past of the oppressed people, and distorts, disfigures, and destroys it'(12). He goes on, in his essay L'An Cinq, de la Révolution Algérienne (published in English as A Dying Colonialism), to describe colonisation as a process that renders life devoid of meaning. Colonisation thus implies that decolonisation is the point of re-establishing life, or becoming a part of life in the governing of oneself and the construction of a way of living. For instance, during the Algerian war, radio communications were a primary tool for French propaganda. Citing French communications technology, or what he calls the 'technical instrument of the radio receiver', Fanon argues that colonialism was able to 'shut its eyes to the real facts of the problem'(13) and instead construct a nationalist narrative of the war. As a result, the Algerian resistance group, the FLN, had boycotted French public radio. By the end of 1956, however, the FLN had shifted its position and instead appropriated the technology to transmit its own narratives of the revolution through a radio programme called 'The Voice of Fighting Algeria'. Fanon writes: 'In making of the radio a primary means of resisting the increasingly overwhelming psychological and military pressures of the occupant, Algerian society made an autonomous decision to embrace the new technique and thus tune itself in on the new signaling systems brought into being by the Revolution'(14).

What is striking about Fanon's interpretation of this shift in practice is the expansion of the technological apparatus as more than just a tool of ideology. Fanon believed in the capacity of technology as a mode of decolonisation and resistance. He writes:

"Incorporated under these conditions into the life of the nation, the radio will have an exceptional importance in the country's building phase. After the war a disparity between the people and what is intended to speak for them will no longer be possible. The revolutionary instruction on the struggle for liberation must normally be replaced by a revolutionary instruction on the building of the nation. The fruitful use that can be made of the radio can well be imagined"(15).

As technologies become more sophisticated, the relationship between colonisation and technology takes on new meaning. Today, we can see how digital technologies that classify and process phenotypic information, such as Adobe Photoshop®, Instagram and other image processing and filtering softwares, might ensure that racialised bodies are represented as objects and in doing so generalise individuals within racially-identified groups. For Paul Gilroy, the meaning and status of racial categorisation is becoming more uncertain within the flattening out of cultural differences by commercial markets. As Gilroy remarks: 'The call of racial being has been weakened by ... the idea that the body is nothing more than an incidental moment in the transmission of code and information, by its openness to the new imaging technologies, and by the loss of mortality as a horizon against which life is to be lived'(16).

Nonetheless, while data politics are vulnerable to the reification of race as a socially constructed concept, they remain necessary so that the implications of race stay at the forefront of technologies that regulate associations between groups and individuals(17). However, in following Gilroy, these conversations extend beyond the observation of technologies and into a direct confrontation with the logics and theories that force us to rethink the relationship between race-based ideologies and digital practices.

Here we can think of the ways anthropomorphism is used in design strategy, and the ways computer interfaces are designed to include only designated characteristics of people. These characteristics are treated as if separate from the material existence, social circumstances or life conditions of human subjects, and raise ethical questions about the meaning of race and gender in the digital arena. Miriam E. Sweeney, in her analysis of Microsoft's former search engine interface 'Ms. Dewey' (2006-2009), explores how interfaces can reveal specific assumptions about race, gender and technology in search engines(18). 'Ms. Dewey' was a Flash interface on Microsoft's Windows Live Search platform, designed to respond to user queries through approximately six hundred discrete vignettes. The bot was not only modelled after a woman of colour, but was highly sexualised. Specific easter eggs showed 'Ms. Dewey' holding a whip in one vignette, and a condom and a helmet in another. Sweeney finds that race and gender function here as crucial elements of interface design and serve more explicitly as ideological rather than instrumental measures of search function. She writes:

The key supposition in this design strategy is that the user will judge the character of an anthropomorphic computer agent based on the same criteria that they use to judge humans in daily interaction ... This is true in the very base constructions of the category of "human," for instance. At different points in history, "humanity" and "humanness" have been denied to people based on their gender, race, religion, ethnicity, and sexuality, and have been used to justify atrocities such as slavery, genocide, and rape.

Latanya Sweeney has also shown how the practice of administering associations of skin colour and data with the life capacities of Black individuals remains in conversation today in processes of employment seeking, university applications and many other inequitable encounters. Sweeney found that online ads suggesting arrest records appeared more often in searches for 'Black-sounding' names than in searches for 'white-sounding' names, even when the person had no arrest record(19). The obscurity of the measurables, 'Black-sounding' and 'white-sounding', only illuminates the importance of Sweeney's research question: can racial groups be adversely affected by a digital application assumed to be universal and objective? Sweeney questions suspected patterns of ad delivery in real-world searches. The findings question the place of social bias and the position of race as knowledge production. Although Sweeney's experiment does not point to any explicit intention by Google as the mediator of data exchange, it recognises the pervasiveness of social bias in the digital. The difficulty here is that most machine learning research seeks to improve performance without altering existing frameworks of the relation between data, calculation and human action(20).
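One way to see how such bias survives the removal of any explicit racial category is through a minimal sketch, which is not Sweeney's method: a classifier trained on historically skewed labels, given only a hypothetical proxy feature correlated with group membership, reproduces the disparity in its predictions.

```python
# A minimal, hypothetical sketch of bias reproduction via a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
postcode = group ^ (rng.random(n) < 0.1)      # proxy: strongly correlated with group
# Historically biased labels: group 1 was flagged far more often for the same behaviour.
flagged = (rng.random(n) < np.where(group == 1, 0.6, 0.2)).astype(int)

# Train WITHOUT the protected attribute -- only the proxy is visible to the model.
model = LogisticRegression().fit(postcode.reshape(-1, 1), flagged)
predictions = model.predict_proba(postcode.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk {predictions[group == g].mean():.2f}")
# The gap between the two groups persists, mirroring the bias in the training labels.
```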

These conversations bring into question the territorialisation of the Black and racialised body by means of social difference, and the place of the digital within racialised and regulatory practices. It is through the relation or interaction with data that the production of the self and the digital are placed in reference to the persistent values of the epidermal gaze.

Sweeney's results are not alone in their implications. Other studies indicate that pricing and other automated digital commerce systems display higher prices in geographical areas known to have large concentrations of Black and minority individuals. Analysis of Google Maps indicated that searches for racist expressions, such as the 'n-word', directed users to former US president Barack Obama's residence at the White House. More recently, it was discovered that photo service Flickr's automated tagging service had mislabelled a Black woman under the category of 'monkeys', a racial stereotype often associated with Black people. So while machines themselves are not racist, they can reproduce and represent racisms within human interaction.

Although these results are striking, there is little data to indicate how the machines determined these outcomes. To what degree are self-learning machines influenced by human factors, like racism and racial profiling? To what extent do machines learn to not just replicate social discriminations, but make decisions based on race-based factors? As machines become more sophisticated at decision making, will they view the value of some individuals over others as a matter of operational efficiency? And, if so, what value systems might be placed on the outcomes? Although these questions are of immediate importance, current research lacks sufficient insight into these concerns.

However, as Fanon and others have argued, these modes of interaction can also function as modes of resistance. Examples can be seen in attempts to confuse and overwhelm pervasive surveillance technologies. For instance, HyperFace, the anti-surveillance clothing project by Berlin-based artist and technologist Adam Harvey, hides the wearer from facial recognition(21). The clothing is designed with patterns that appear to be eyes, mouths and other features to overwhelm facial recognition systems by presenting them with thousands of false positives. Harvey's previous project, CV Dazzle, created makeup and hairstyles that prompted machines to misrecognise actual faces. There are also Sharif et al.'s 'physically realizable' and 'inconspicuous' eyeglasses meant to confuse facial recognition softwares. By changing the ways machines interpret captured pixels, the researchers write that: 'When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual'(22).
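The underlying principle, changing pixels slightly so that a classifier's interpretation changes greatly, can be sketched with the generic fast gradient sign method rather than Sharif et al.'s printed eyeglass frames; the model, image and label here are hypothetical placeholders, not the researchers' actual set-up.

```python
# A minimal sketch of an adversarial perturbation against an image classifier,
# assuming a hypothetical pretrained `model` and an input tensor in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an image nudged so the classifier is more likely to mislabel it."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                     # forward pass
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()                                        # gradient of loss w.r.t. pixels
    # Step *up* the loss surface: tiny pixel changes, large effect on the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```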

Other interventions are made by the artists Keith + Mendi Obadike in their conceptual work 'The Interaction of Coloreds'. The artists draw on designer Josef Albers's Interaction of Color (1963), a design text which features ten representative colour studies chosen by Albers. Albers elaborates on the displacement of colour selection into the subjective by stating that our preferences for certain colours are governed by an internal process. Re-conceptualising Albers's claim, Keith + Mendi Obadike explicitly note that practices of administering so-called 'color tests' originated on slave plantations as a basis for social selection. Brown paper bags were used to classify which slaves were suitable for certain work based on skin colour. Those with skin darker than the bag were sent to the fields; those with lighter skin were placed indoors. The artists conceive that the practice of administering associations of skin colour with social production remains in conversation today in processes of employment seeking and university applications.

I would like to point out that these examples remain within the field of representation, where a problematic is generated, as Deleuze would argue, when a logic of discreteness (in these terms, data) remains indifferent to its relational contexts. This type of design 'thinking' is vulnerable to fixing relationships into finite understandings and equivocal conceptions of being(23). To critically understand the interaction between humans and data is an attempt to reveal racisms and other violences, as well as to return to the relationships that comprise their social circumstances. Ultimately, what I wish to articulate is that data-driven technologies do not just represent identity; they shift and supplant logics of governance to racialise and re-insert power over certain individuals.

It would do us well to consider what a process of decolonisation might mean in this context. To do so would first necessitate a critical look at our reliance on reason, generalisation and calculation as a means of understanding the world. However, it would also require alternative frameworks of resistance that mark new methods of governance and power. Inasmuch as data is a continuation of a historical logics of practice, as Luciana Parisi asserts, attention must turn to the 'use-meaning' of data in order to fully articulate the meanings that are embedded in collective practices(24). To do so is to advocate for a more expansive distribution of knowledge through a logics that can detangle reasoning from its naturalised forms. In other words, we must ask ourselves what hypothetical conditions are necessary to divorce ourselves from the primacy of data and instead enter into a design space that can unravel tendencies towards generalisation and preemption in favour of a new, more contingent relationship with the digital: a relationship that can decolonise and bring life back into the autonomy of being.

Annotations

1 Rob Kitchin, The Data Revolution.

2 Rob Kitchin, The Data Revolution, 113.

3 Ian H. Witten et al., Data Mining: Practical Machine Learning Tools and Techniques.

4 Ibid.

5 Rob Kitchin, The Data Revolution, 179.

6 P. Langley, "Toward a Unified Science of Machine Learning," in Machine Learning, vol. 3, 253-259.

7 P. Langley, "Toward a Unified Science of Machine Learning," in Machine Learning, vol. 3, 278.

8 Gareth James et al., An Introduction to Statistical Learning: With Applications in R.

9 Susan Schuppli, "Deadly Algorithms," at AUTONOMY / AUTOMATION, Bartlett School of Architecture, UCL.

10 Fox D. Harrell, "Algebra of Identity: Skin of Wind, Skin of Streams, Skin of Shadows, Skin of Vapor."

11 Lisa Gitelman and Virginia Jackson, "Introduction," in 'Raw Data' Is an Oxymoron.

12 Frantz Fanon, The Wretched of the Earth, 210.

13 Frantz Fanon, A Dying Colonialism, 32.

14 Frantz Fanon, A Dying Colonialism, 84.

15 Frantz Fanon, A Dying Colonialism, 97.

16 Paul Gilroy, Against Race: Imagining Political Culture Beyond the Color Line, 36.

17 Paul Gilroy, Against Race: Imagining Political Culture Beyond the Color Line.

18 Miriam Sweeney, "Not Just a Pretty (Inter)face: A Critical Analysis of Microsoft's 'Ms. Dewey'."

19 Latanya Sweeney, "Discrimination in Online Ad Delivery," in Queue, vol. 11, no. 3.

20 Thomas G. Dietterich, "Learning at the Knowledge Level."

21 "Anti-surveillance clothing aims to hide wearers from facial recognition," available at https://www.theguardian.com/technology/2017/jan/04/anti-surveillance-clothing-facial-recognition-hyperface

22 Mahmood Sharif et al., "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition," accessed at https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf

23 Henry Somers-Hall, Hegel, Deleuze, and the Critique of Representation, 3.

24 Luciana Parisi, "Automated Design and Computational Reason."
