
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence [Top 100 research & commentary of 2020]

This article is part 11 (and the final part) of a series reviewing selected papers and associated commentary from Altmetric’s list of the top 100 most discussed and shared research and commentary of 2020.

In the article ranked #87 in Altmetric’s top 100 list for 2020,1 Shakir Mohamed, Marie-Therese Png, and William Isaac aim to provide a perspective on the importance of a critical science approach, and in particular of decolonial thinking, in understanding and shaping ongoing advances in artificial intelligence (AI).

Algorithmic coloniality

Mohamed, Png, and Isaac advise that, like physical spaces, digital structures can become sites of extraction and exploitation, and thus sites of coloniality. The coloniality of power can be observed in digital structures in the form of socio-cultural imaginations, knowledge systems, and ways of developing and using technology that are based on systems, institutions, and values which persist from the past and remain unquestioned in the present. As such, emerging technologies like AI are directly subject to coloniality.

To describe this “algorithmic coloniality,” Mohamed, Png, and Isaac introduce a taxonomy of decolonial foresight: algorithmic oppression, algorithmic exploitation, and algorithmic dispossession. Within these forms of decolonial foresight, they present a range of use cases that they identify as sites of coloniality: algorithmic decision systems, ghost work, beta-testing, national policies, and international social development.

Algorithmic oppression

Algorithmic oppression extends the unjust subordination of one social group and the privileging of another – maintained by a “complex network of social restrictions” ranging from social norms and laws to institutional rules, implicit biases, and stereotypes – through automated, data-driven, and predictive systems.

Site 1: Algorithmic decision systems. Predictive systems leveraging AI have led to new forms of policing, surveillance, and access to government services, and have reshaped conceptions of identity and speech in the digital age. Such systems were developed with the ostensible aim of providing decision-support tools that are evidence-driven, unbiased, and consistent. Yet evidence of how these tools are deployed often shows the opposite: these systems risk entrenching historical injustice and amplifying the social biases in the data used to develop them. Evidence of such instances is abundant.

Algorithmic exploitation

Algorithmic exploitation considers the ways in which institutional actors and industries that surround algorithmic tools take advantage of (often already marginalised) people by unfair or unethical means, for the asymmetrical benefit of these industries. The following examples examine colonial continuities in labour practices and scientific experimentation in the context of algorithmic industries.

Site 2: Ghost workers. Many of the recent successes in AI are possible only when the large volumes of data needed are annotated by human experts to expose the common-sense elements that make the data useful for a chosen task. The people who do this labelling for a living, the so-called “ghost workers,” do this work in remote settings, distributed across the world using online annotation platforms or within dedicated annotation companies. In extreme cases, the labelling is done by prisoners and the economically vulnerable, in geographies with limited labour laws. This is a complicated scenario. On the one hand, such distributed work enables economic development, flexibility in working, and new forms of rehabilitation. On the other, it establishes a form of knowledge and labour extraction, paid at very low rates, and with little consideration for working conditions, support systems, and safety.
Site 3: Beta-testing. There is a long and well-documented history of the exploitation of marginalised populations for the purpose of scientific and technological progress. It is with this historical lens that the practice of beta-testing is examined. Beta-testing is the testing and fine-tuning of early versions of software systems to help identify issues in their usage in settings with real users and use cases. In this testing, there are several clearly exploitative situations, in which organisations use countries outside their own as testing grounds, specifically because those countries lack pre-existing safeguards and regulations around data and its use, or because the mode of testing would violate laws in the organisations’ home countries. This phenomenon is known as ethics dumping: the export of harms and unethical research practices by companies to marginalised and vulnerable populations or to low- and middle-income countries, often aligning with the old divisions of colonialism. As an example, Cambridge Analytica (CA) elected to beta-test and develop algorithmic tools for the 2017 Kenyan and 2015 Nigerian elections, with the intention of later deploying these tools in US and UK elections. Kenya and Nigeria were chosen in part due to their weaker data protection laws compared with those of CA’s base of operations in the United Kingdom – a clear example of ethics dumping.

Algorithmic dispossession

Algorithmic dispossession describes how, in the growing digital economy, certain regulatory policies result in a centralisation of power, assets, or rights in the hands of a minority and the deprivation of power, assets, or rights from a disempowered majority. The following examples examine this process in the context of international AI governance (policy and ethics) standards, and AI for international social development.

Site 4: National policies and AI governance. Power imbalances within the global AI governance discourse encompass issues of data inequality and data infrastructure sovereignty, but also extend beyond this. There are questions of who any AI regulatory norms and standards are protecting, who is empowered to project these norms, and the risks posed by a minority continuing to benefit from the centralisation of power and capital through mechanisms of dispossession. That is, we must be mindful of “who sits at the table, what questions and concerns are sidelined and what power asymmetries are shaping the terms of debate.” For example, a review of the global landscape of AI ethics guidelines pointed out the “under-representation of geographic areas such as Africa, South and Central America and Central Asia” in the AI ethics debate. The review observes a power imbalance wherein “more economically developed countries are shaping this debate more than others, which raises concerns about neglecting local knowledge, cultural pluralism, and the demands of global fairness.”
Site 5: International social development. Much of the current policy discourse surrounding AI in developing countries concerns economic and social development, where advanced technologies are put forward as solutions for complex developmental scenarios, represented by the growing areas of AI for Good and AI for the Sustainable Development Goals (AI4SDGs). In this discourse, there is a need to expand the currently limited and vague definitions within the computer sciences of what “social good” means. Where a root cause of failure of developmental projects lies in default attitudes of paternalism, technological solutionism, and predatory inclusion, decolonial thinking shifts the view towards systems that instead promote an active and engaged political community. This implies a shift towards design and deployment of AI systems driven by the agency, self-confidence, and self-ownership of the communities they work for, e.g. adopting co-development strategies for algorithmic interventions alongside the communities in which they are deployed.

Tactics for a decolonial AI

In consideration of the five sites of coloniality above, Mohamed, Png, and Isaac propose sets of tactics for the future development of AI, which they believe open many areas for further research and action. They advise that tactics do not lead to a conclusive solution or method, but instead to the contingent and collaborative construction of other narratives.

Mohamed, Png, and Isaac submit three sets of tactics: supporting a critical technical practice of AI, establishing reciprocal engagements and reverse learning, and the renewal of affective and political community.

Towards a critical technical practice of AI

Critical technical practice (CTP) takes a middle ground between the technical work of developing new AI algorithms and the reflexive work of criticism that uncovers hidden assumptions and alternative ways of working. CTP has been widely influential, having found an important place in human-computer interaction (HCI) and design. By infusing CTP with decoloniality, a productive pressure can be applied to technical work, moving beyond good-conscience design and impact assessments that are undertaken as secondary tasks, to a way of working that continuously generates provocative questions and assessments of the politically situated nature of AI.

Mohamed, Png, and Isaac explore five topics constituting such a practice:

  • Fairness. Efforts at fairness can still lead to discriminatory or unethical outcomes for marginalised groups, depending on the underlying dynamics of power, because definitions of fairness are often a function of political and social factors. There is a need to question who is protected by mainstream notions of fairness, and to understand the exclusion of certain groups as continuities and legacies of colonialism embedded in modern structures of power, control, and hegemony. In response, recent efforts within this critical practice have proposed fairness metrics that attempt to use causality or interactivity to integrate more contextual awareness of human conceptions of fairness (a simple illustrative sketch of one group fairness metric follows this list).
  • Safety. The area of technical AI safety is concerned with the design of AI systems that are safe and appropriately aligned with human values. This raises the philosophical question of value alignment: how the implicit values learnt by AI systems can be aligned with those of their human users. There is a need to question whose values and goals are represented, and who is empowered to articulate and embed these values. Of importance here is the need to integrate discussions of social safety alongside questions of technical safety.
  • Diversity. With a critical lens, efforts towards greater equity, diversity, and inclusion (EDI) in the fields of science and technology are transformed from the prevailing discourse – which focuses on the business case of building more effective teams, or on diversity as a moral imperative – into diversity as a critical practice through which issues of homogenisation, power, values, and cultural colonialism are directly confronted. Such diversity changes the way teams and organisations think at a fundamental level, allowing more intersectional approaches to problem-solving to be taken.
  • Policy. There is growing traction in AI governance in developing countries for encouraging localised AI development and for structuring protective mechanisms against exploitative or extractive data practices. Although there are clear benefits to such initiatives, the international organisations supporting these efforts are still positioned within the old colonial powers, maintaining the need for self-reflexive practices and consideration of the wider political economy.
  • Resistance. The technologies of resistance have often emerged as a consequence of opposition to coloniality. A renewed critical practice can also ask the question of whether AI can itself be used as a decolonising tool, e.g. by exposing systematic biases and sites of redress. Furthermore, although AI systems are confined to a specific sociotechnical framing, Mohamed, Png, and Isaac believe that they can be used as a decolonising tool while avoiding a techno-solutionism trap.
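
As a concrete illustration of the fairness point above: the paper itself does not prescribe any particular metric, but the following minimal Python sketch shows what one common group fairness metric – the demographic parity difference – looks like in practice. The data and group labels are hypothetical; the point is that the metric only measures disparities between the groups we choose to encode, so deciding which groups and outcomes count is a normative choice, not a purely technical one.

```python
# A minimal sketch (not from the paper) of one common group fairness metric:
# the demographic parity difference. The data and group labels are hypothetical.

from typing import Sequence

def demographic_parity_difference(predictions: Sequence[int],
                                  groups: Sequence[str]) -> float:
    """Largest gap in positive-prediction rates between groups.

    A value of 0 means all groups receive positive predictions at the same
    rate; larger values indicate greater disparity. The metric only 'sees'
    the groups we choose to encode, which is itself a normative decision.
    """
    rates = {}
    for group in set(groups):
        members = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(members) / len(members)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical model outputs (1 = favourable decision) and group membership.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```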

Reciprocal engagements and reverse learning

Despite the asymmetries of colonial power, the historical record shows that colonialism was never only an act of imposition. In a reversal of roles, the colonists often took lessons from the colonised, establishing a form of reverse learning. Reverse learning speaks directly to philosophical questions of what constitutes knowledge. Deciding what counts as valid knowledge, what is included within a dataset, and what is ignored and unquestioned is a form of power held by AI researchers that cannot be left unacknowledged. It is in confronting this condition that decolonial science, and particularly the tactic of reverse learning, makes its mark.

Mohamed, Png, and Isaac put forward three modes through which reciprocal learning can be enacted:

  • Dialogue. A decolonial shift can be achieved through systems of meaningful intercultural dialogue. Such dialogue is core to the field of intercultural digital ethics, which asks how technology can support society and culture, rather than becoming an instrument of cultural oppression and colonialism.
  • Documentation. New frameworks have been developed that make explicit the representations of knowledge assumed within a dataset and within deployed AI systems (a minimal illustrative sketch appears at the end of this subsection).
  • Design. There is now also a growing understanding of approaches for meaningful community-engaged research, using frameworks like the IEEE Ethically Aligned Design, technology policy design frameworks like Diverse Voices, and mechanisms for the co-development of algorithmic accountability through participatory action research. The framework of citizens’ juries has also been used to gain insight into the general public’s understanding of the role and impact of AI.

A critical viewpoint may not have been the driver of these solutions, and these proposals are themselves subject to limitations and critique, but through an ongoing process of criticism and research, they can lead to powerful mechanisms for reverse learning in AI design and deployment.
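
As an illustration of the documentation mode above: the paper does not specify a documentation format, but frameworks in this vein in the wider literature include dataset datasheets and model cards. The following minimal Python sketch, with hypothetical fields and values, simply illustrates the idea of recording a dataset’s provenance, annotation process, and known gaps alongside the artefact itself, so that its embedded assumptions can be surfaced and questioned.

```python
# A minimal sketch (not prescribed by the paper) of dataset documentation in the
# spirit of dataset datasheets / model cards. All fields and values are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDatasheet:
    """Records the provenance and assumptions behind a dataset."""
    name: str
    collected_by: str
    collection_context: str          # where, when, and under what conditions
    annotation_process: str          # who labelled the data and how they were paid
    populations_represented: List[str] = field(default_factory=list)
    populations_missing: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)
    consent_and_licensing: str = "unspecified"

# Hypothetical example: making visible whose knowledge the dataset encodes.
sheet = DatasetDatasheet(
    name="example-credit-decisions-v1",
    collected_by="Example Corp research team",
    collection_context="Loan applications, one country, 2015-2019",
    annotation_process="Outsourced annotators paid per task via an online platform",
    populations_represented=["urban applicants with formal credit histories"],
    populations_missing=["informal-economy workers", "applicants without bank accounts"],
    known_limitations=["labels reflect past lending decisions, including their biases"],
    consent_and_licensing="Collected under terms of service; no explicit research consent",
)
print(sheet.name, "-", len(sheet.known_limitations), "documented limitation(s)")
```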

Renewed affective and political communities

How we build a critical practice of AI depends on the strength of political communities to shape the ways they will use AI, their inclusion and ownership of advanced technologies, and the mechanisms in place to contest, redress, and reverse technological interventions. The decolonial imperative asks for a move away from attitudes of technological benevolence and paternalism. The challenge lies in how new types of political community can be created that are able to reform systems of hierarchy, knowledge, technology, and culture at play in modern life.

One tactic lies in embedding the tools of decolonial thought within AI design and research. Contrapuntal analysis is one important critical tool that actively exposes the habits and codifications that embed questionable binarisms in research and products. Another tactic lies in supporting grassroots organisations and their ability to create new forms of affective community, elevate intercultural dialogue, and demonstrate the forms of solidarity and alternative community that are already possible. Many such groups already exist in the field of AI – such as Data for Black Lives, the Deep Learning Indaba, Black in AI, and Queer in AI – and are active across the world.

The advantage of historical hindsight means that the principles of living that were previously made incompatible with life by colonial binaries can now be recovered. Friendship quickly emerges as a lost trope of anticolonial thought. This is a political friendship that has been expanded in many forms: in the politics of friendship and as affective communities in which developers and users seek alliances and connection outside possessive forms of belonging.

Finally, these views of AI taken together lead quickly towards fundamental philosophical questions of what it is to be human – how we relate and live with each other in spaces that are both physical and digital, how we navigate difference and transcultural ethics, how we reposition the roles of culture and power at work in daily life – and how the answers to these questions are reflected in the AI systems we build.

What does this mean for knowledge management?

Decolonial AI needs to be considered as an important aspect of the decolonisation of knowledge and knowledge management (KM). Further, some of the tactics for a decolonial AI put forward above have the potential to be applied not just to AI, but also in the broader context of decolonising knowledge, and this should also be considered.

Article source: Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence, CC BY 4.0.

Header image source: OpenClipart-Vectors on Pixabay, Public Domain.

References:

  1. Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659-684.

Also published on Medium.

Bruce Boyes

Bruce Boyes (www.bruceboyes.info) is a knowledge management (KM), environmental management, and education professional with over 30 years of experience in Australia and China. His work has received high-level acclaim and been recognised through a number of significant awards. He is currently a PhD candidate in the Knowledge, Technology and Innovation Group at Wageningen University and Research, and holds a Master of Environmental Management with Distinction. He is also the editor, lead writer, and a director of the award-winning RealKM Magazine (www.realkm.com), and teaches in the Beijing Foreign Studies University (BFSU) Certified High-school Program (CHP).
