Abdelrahman Hassan

It takes a village to raise an AI

Updated: May 30, 2022


Data is ubiquitous; it is everywhere and nowhere, a ubiquity only accelerated by the onset and wide adoption of Artificial Intelligence (AI). It's hard to think of an area of human experience that AI hasn't transformed, attempted to transform, or is plotting to transform: from politics to healthcare, from knowledge production to social media, from spirituality to love. The buzzword here is "transformation"; we're often promised a transformative property of tech. Through this write-up, I probe that very promise of transformation: What is being transformed? What is it being transformed into?


Image: Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

In 2011, I was a Computer Science student at the American University in Cairo. Swept up by the promise of transformation myself, I took refuge in building tools and creating little technical utopias that reside in operating systems. My creative discomfort came when I realized the country was going through its own collective transformation. This time, however, the transformation paved its way out of algorithmically-mediated social media and into public squares.


What ensued was an intricate dance between the technical and the social. AI was neither a purely technical object nor a social construct. AI was a quasi-object, one existing outside our fabricated subject-object divide. As the French philosopher Bruno Latour puts it in his book We Have Never Been Modern:


“Quasi-objects are in between and below the two poles (…) [and] are much more social, much more fabricated, much more collective than the ‘hard’ parts of nature (…), [yet] they are much more real, nonhuman and objective than those shapeless screens on which society (…) needed to be ‘projected’.” (55)


AI needs hyper-literacy: my foray into critical social theory

I dedicated the rest of my study, research, and professional life to understanding life between the poles of Data Science and Critical Social Theory. The more AI was posed as a solution, the more inequalities it promoted, amplified, and spawned. In Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Kate Crawford provides a testament to the uneven ground that AI creates.


She illustrates that the promise of automation, intelligence, and personalization is matched with a reality riddled with tales of surveillance capitalism, loss of user agency, and deepened social, racial, and gender inequality. Our understanding of AI systems, then, must run deeper than understanding datasets, modelling algorithms, and error/accuracy metrics; it needs to be paired with an understanding of the social fabric within which these systems operate.


AI, as a technical quasi-object, is always in flux; it is an object constantly shaped by its usage. AI requires a form of hyper-literacy: an understanding of the inner workings of AI, paired with data literacy, embedded in cultural and ecological literacy. It is an understanding that AI is as much a cultural object as it is a technical one. Hyper-literacy becomes even more important as AI becomes instrumental in changing the way we perceive ourselves and the world.


In an attempt to cultivate this necessary hyper-literacy, I co-created the Atlas of Algorithmic (in)equality, the result of a 10-month interdisciplinary reading group hosted across multiple cities in the Netherlands. Alongside the Future Based Collective, I invited a host of AI practitioners, designers, and researchers to help define the ways in which inequalities can be algorithmically prompted. Together, we were able to map the problem space of AI. We realized that it is both impossible and inadvisable to separate our AI systems from their colonial contexts, from our own politics of desire, and from the ecological emergencies that define our time.



A cautionary tale of techno-solutionism

The toxic separation of AI from its social context is often denoted as ‘techno-solutionism’, a term popularised through Evgeny Morozov’s book To Save Everything, Click Here. Techno-solutionism runs on the idea that technology and AI alone can be a quick escape from complex, real-life problems. Take, for example, the phenomenon of cyber harassment, where more than 49% of women in Arab countries feel unsafe in online spaces. In 2019, as part of my work as a Digital Transformation Designer at the Amsterdam University of Applied Sciences, I was commissioned to run a project that automated responses to gender and racial violence on Twitter.


Over the span of six months, we built, trained, and retrained a bot that could detect abusive language as well as respond to it. Although our anti-harassment Twitter bot, named Bot Botanik, algorithmically cleared established benchmarks using an ensemble of ever-growing datasets, it was a disaster in practice. It failed to grasp the dynamic, changing lingo of abuse and often flagged content that was benign or, at best, irrelevant. The irony is that the bot itself was constantly being flagged as abusive by Twitter's own algorithm.
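
The exact pipeline behind Bot Botanik is beside the point here, but to make the kind of system concrete, below is a minimal, hypothetical sketch of an abusive-language classifier. The dataset file, column names, and model choice are my own placeholders, not the project's.

```python
# Minimal, hypothetical sketch of an abusive-language classifier; this is NOT
# Bot Botanik's actual code, and "labelled_tweets.csv" is a placeholder dataset.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("labelled_tweets.csv")  # assumed columns: "text", "abusive" (0 or 1)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["abusive"], test_size=0.2, random_state=42)

# Character n-grams cope slightly better with obfuscated slurs ("b!tch", "h8"),
# but still miss the shifting, coded lingo of real-world harassment.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), min_df=2),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A held-out test report like this is precisely the kind of benchmark such a bot can clear while still failing in the wild, because live harassment shifts its vocabulary faster than any fixed dataset.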


Lessons learnt: an intersectional approach to AI

The failure of our experiment was not only a testament to an outdated mindset of techno-solutionism; it also raised bigger questions about how we can design our AI systems so that they are fundamentally anti-harm. How do we equip our automated agents to traverse socio-technical landscapes we don't yet understand? Both the premise and the promise of AI needed to change.


And so, I borrowed heavily from the emerging field of intersectional AI: although data is ubiquitous, it is still somehow insufficient. Mimi Onuoha's Library of Missing Datasets and Caroline Sinders's Feminist Data Set are both efforts at filling those gaps. Such efforts are byproducts of a power-conscious ‘data feminism’. The data feminist paradigm runs on the premise that AI systems should be in constant conversation with their users. Being quasi-objects, neither AI nor its users can any longer be considered passive subjects; both are active agents. Users produce the necessary data for AI to function, while AI systems make decisions that shape the realities of those users.


Between the two actors is a network of social, political, and cultural conditions that prompt injustices and power imbalances. This reality problematizes the way AI is currently designed and governed: without the participation of the user, and without attempting to disrupt the vectors of power it operates in. Catherine D'Ignazio and Lauren F. Klein's ‘Data Feminism’ framework gives us insight into new modes of collecting data and designing systems that allow for dissent.


Although many individuals possess neither the lingo nor the literacy to articulate AI-induced harm, we are exposed to those harms on a daily basis. With every data leak, Cambridge Analytica-esque scandal, and service-inequality slip-up, users lose trust in AI systems. Even if not articulated, bias is felt. Shades of techno-pessimism run through popular culture and public sentiment.


In a recent five-country study, more than 70% of citizens said that they do not trust the decisions of an AI system. This lack of trust in AI systems, paired with techno-pessimism, is, I theorise, a dangerous recipe: it allows for an ongoing monopoly of tech without pushback. In the world of AI, we are all prosumers (both producers and consumers); we all produce as much data as we consume, if not more. Our navigation of the digital world constantly shapes it. In every equation of trust, we are the other half.


A critical social theorist's response

For the past two years, I've tried to counter this techno-pessimism by exploring what an operationalization of critical theory would look like in AI pipelines. To decolonize a system, to liberate it from the shackles of techno-solutionism and singular world views, we must first think of decoloniality as both a deconstructive and a constructive process. I've worked with activists, professionals, designers, researchers, and tech evangelists to understand what this necessary decolonization entails.


In collaboration with Imagination of Things, I co-developed a Disruptive Bot Building Workshop, wherein participants cultivate both technical and cultural literacies to build AI-powered interventions. The goal was to invoke social choreographies that help us explore and unveil a social issue, rather than attempt to solve it.


As the bias and subsequent harm of AI systems become as ubiquitous as the data they run on, questions are often raised around accountability. Who needs to safeguard AI processes against harmful outcomes? How do we know if a tool we built in naive solutionism turns out to be discriminatory, furthering already-anchored vectors of racial, gender-based, and social inequality?


The case of Microsoft's deep-learning Twitter bot Tay comes to mind: while initially harmless, the bot evolved with use into a system that amplified racist, sexist, and pro-Nazi sentiments. It was a prime example of 'emergent' harm, one that was not even prompted by bad data or insufficient design. If we look at AI as a lab-induced, objective method applied to a subjective world, then we will come to the realization that AI-related harms are inevitable.


The community-in-the-loop approach

The more I was involved in the theorization and practice of AI, the more one axiom of decolonial AI became apparent: governance against AI harms must be a communal effort. There are currently three common configurations for governing AI:


  • The first is a 'No AI' configuration, which is the absence of AI elements in a process, either due to human resistance or technical impossibility.

  • The second is a 'Full AI' configuration, the polar opposite setting, where automation handles a process fully, without oversight or interference from humans.

  • The third configuration is the "human-in-the-loop" approach, wherein a human needs to supervise decisions made by automated systems. This approach is often posed as the gold standard for curbing bias and harm (see the sketch after this list).
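
To make the contrast between the second and third configurations concrete, here is a small hypothetical sketch; the class, the function names, and the 0.9 confidence threshold are my own illustration, not a description of any particular system.

```python
# Hypothetical sketch: "Full AI" versus "human-in-the-loop", reduced to a routing rule.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str          # e.g. "abusive" or "benign"
    confidence: float   # the model's confidence in that label
    decided_by: str     # "model" or "human"

def full_ai(label: str, confidence: float) -> Decision:
    # Full AI: the model's output is final, whatever its confidence.
    return Decision(label, confidence, decided_by="model")

def human_in_the_loop(label: str, confidence: float,
                      threshold: float = 0.9) -> Decision:
    # Human-in-the-loop: low-confidence calls are escalated to a reviewer.
    if confidence < threshold:
        return Decision(label, confidence, decided_by="human")  # queued for review
    return Decision(label, confidence, decided_by="model")

print(full_ai("abusive", 0.62))            # acted on immediately
print(human_in_the_loop("abusive", 0.62))  # routed to a human reviewer
```

Reducing the loop to a confidence threshold also exposes its limitation: the human only ever reviews individual, low-confidence decisions, never the assumptions baked into the labels and training data.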


However, we often found that a human-in-the-loop approach does little to address the bigger cultural biases that AI has picked up on. What I suggest instead is a community-in-the-loop approach. Here, users are pivotal to the design of a system, from its inception and maintenance to its governance and post-deployment feedback mechanisms. In the community-in-the-loop approach, an AI has neither owners nor end-users; rather, it is a collective object of care.


Even when built communally, AI harm will still exist as a byproduct of the abstractions that any AI system takes liberties in. However, in a community-based approach, data subjects (formerly users) are able to report, reject, and redesign AI systems through established mechanisms. To truly decolonize AI, we must allow the whole community to participate in its choreography.
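
What those established mechanisms could look like in practice is an open design question. As one hypothetical sketch (the class names and the three-report retirement rule are purely illustrative), a community report might be treated as a first-class object with the power to take a system offline, not merely to annotate it:

```python
# Hypothetical sketch of a community-in-the-loop feedback mechanism.
# CommunityReport, ModelRegistry, and the retirement rule are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CommunityReport:
    model_id: str
    reporter: str                    # a data subject, not an "end-user"
    harm: str                        # description of the harm experienced
    wants_retirement: bool = False   # communities can reject a system outright

@dataclass
class ModelRegistry:
    reports: list = field(default_factory=list)
    retired: set = field(default_factory=set)

    def file(self, report: CommunityReport) -> None:
        self.reports.append(report)
        # Illustrative rule: three retirement requests take the model offline,
        # pending a communal redesign rather than a quiet patch by its owners.
        asks = sum(r.wants_retirement for r in self.reports
                   if r.model_id == report.model_id)
        if asks >= 3:
            self.retired.add(report.model_id)

registry = ModelRegistry()
for person in ("amina", "li", "sofia"):
    registry.file(CommunityReport("harassment-bot-v2", person,
                                  "flagged dialect speech as abuse",
                                  wants_retirement=True))
print(registry.retired)  # {'harassment-bot-v2'}
```

The design choice being illustrated is that rejection is a built-in outcome of the feedback loop, not an escalation path owned by the system's makers.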

 

Further Reading


Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources,” AI Now Institute and Share Lab, (2018) https://anatomyof.ai


Johann Jakob Häußermann & Christoph Lütge, "Community-in-the-loop: towards pluralistic value creation in AI, or—why AI needs business ethics," AI Ethics (2021) https://doi.org/10.1007/s43681-021-00047-


Catherine D'Ignazio and Lauren F. Klein, "Data Feminism," The MIT Press (2020) https://data-feminism.mitpress.mit.edu/pub/f8vw7hh


Syed Mustafa Ali, "A brief introduction to decolonial computing," XRDS 22, 4 (2016) https://doi.org/10.1145/2930886


Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan and Michelle Bao, "The Values Encoded in Machine Learning Research," arXiv (2021) https://arxiv.org/abs/2106.15590
