The hidden cost of AI: Digital colonialism and the Global South
29 Apr 2026
A recent WACC Europe discussion featured a presentation by journalist Ingo Dachwitz on the largely invisible workforce, concentrated in the Global South, that enables artificial intelligence systems and social media feeds.
The myth of artificial intelligence
When Mark Zuckerberg appeared before the US Congress in 2018, facing questions about hate speech and disinformation on Facebook, his answer was consistent: AI will fix this. Five to ten years, he promised. That deadline has now nearly passed, and Ingo Dachwitz, a Berlin-based journalist and co-author of the 2025 book Digitaler Kolonialismus (Digital Colonialism), opened his presentation at the WACC Europe discussion in March by inviting participants to draw their own conclusions.
Dachwitz argued that the AI industry is built on a carefully maintained myth – that its products are a kind of technological magic. The reality, he said, is far more human and far less glamorous. Citing Australian AI critic Kate Crawford, he noted that AI is neither artificial nor intelligent. It depends, at every stage of its production chain, on enormous quantities of human labour.
At the core of this labour is data annotation: the manual, painstaking process of labelling images, text, and video so that machine learning algorithms know what they are seeing. Training the self-driving car systems of major automotive companies requires people to sit at screens drawing precise bounding boxes around pedestrians, cyclists, and traffic signs – thousands of times a day.
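To make concrete what this work actually produces, here is a minimal sketch of a single bounding-box annotation record. The schema and field names are illustrative, loosely modelled on common open formats such as COCO; this is not any manufacturer's actual pipeline.

```python
# Illustrative bounding-box annotation record (hypothetical schema,
# loosely modelled on open formats such as COCO).
annotation = {
    "image_id": "frame_000142.jpg",   # one video frame among thousands
    "label": "pedestrian",            # class chosen by the annotator
    "bbox": [412, 230, 58, 131],      # x, y, width, height in pixels
    "annotator_id": "worker_7831",    # the human behind the "AI"
}
```

An annotator producing thousands of such records a day is, in effect, hand-assembling the "ground truth" the model is later celebrated for learning.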
The development of ChatGPT required workers to read and label deeply disturbing content – violent, sexual, and otherwise harmful material – so that the model could learn what not to say. As Dachwitz put it: ChatGPT is successful in large part because it knows what it ought not to say. That knowledge was purchased at a heavy psychological cost to workers who remain almost entirely invisible.
Amazon’s experiment with cashier-free retail provided another telling example. The company marketed its “Just Walk Out” technology on the promise that shoppers could simply leave the store and AI would charge their accounts automatically. Behind the scenes, however, approximately 1,000 workers in India were manually reviewing footage and correcting errors the system could not handle. Amazon has since quietly discontinued the service.
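The pattern Dachwitz described is what engineers call a human-in-the-loop fallback: when the model is unsure, the case is silently routed to a person. The sketch below illustrates the general architecture only; the threshold and names are hypothetical, not Amazon's actual system.

```python
# Human-in-the-loop fallback: a generic sketch, not Amazon's system.
CONFIDENCE_THRESHOLD = 0.95  # hypothetical cut-off

def resolve_purchase(confidence: float, model_result: str,
                     review_queue: list[str]) -> str:
    """Return the model's answer if it is confident enough;
    otherwise queue the case for manual review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return model_result            # the automated path shoppers see
    review_queue.append(model_result)  # the hidden human-labour path
    return "pending_human_review"
```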
Exploitation as a service
Dachwitz described the industry supplying this labour – business process outsourcing (BPO) firms and microwork platforms – as operating a system he called exploitation-as-a-service.
The World Bank estimates that between 154 million and 435 million people worldwide perform this kind of hidden digital labour. The big tech companies and car manufacturers who commission the work typically refuse to disclose how many ghost workers sit behind their products – because, as Dachwitz observed, to do so would shatter the myth of AI.
The working conditions these people endure are severe. An International Labour Organization study found that AI platform workers in Kenya earn an average of 89 cents per hour. Contracts run for as little as one month – or exist only for the duration of a single task. Workers have no health insurance, no pension provision, and no job security.
Those who annotate traumatic content receive little or no psychological support. William (name changed), a team leader in Nairobi who annotated sexual content for OpenAI, described being offered group counselling sessions in which 30 people sat in a room while a counsellor asked “How is work going?” – a far cry from the individual support his team needed.
Misleading job advertisements, relentless performance monitoring, non-disclosure agreements, and the systematic suppression of trade union organising complete the picture.
“Young and vulnerable Africans like me have worked to ensure that everyone is safe when using ChatGPT. The reward we got is ridiculous, peanuts! While companies earn billions from our work. We know what that’s called. We know it from our history. It’s colonialism, digital colonialism.” — William, data annotator, Nairobi (name changed)
Why “colonialism”?
The term digital colonialism is not Dachwitz’s invention. It originates, he was at pains to emphasise, in the Global South, from thinkers like Renata Avila, who wrote the afterword to his book and has been making this argument for over a decade. Part of Dachwitz’s project with his co-author Sven Hilbig has been to bring that critique to German-speaking audiences.
The parallel he draws is with the extractivist model at the heart of European colonialism: raw materials and labour were extracted from colonised territories, economic development in those territories was oriented entirely towards the interests of the metropole, and colonial powers frequently left behind states that were not economically viable on their own.
Digital colonialism, he argued, does not represent a new form of colonialism. It is a continuation and reinforcement of an existing world order – one in which the Global North accrues profit and power at the expense of the Global South.
The numbers support the analysis. In Kenya, 75% of AI platform workers hold university degrees or are in the process of completing them; in India, the figure is 96%. Highly qualified people are being absorbed into low-paid, precarious work that benefits corporations headquartered thousands of miles away, contributing little to tax revenues or GDP in the countries where the labour takes place.
The vast majority of venture capital funding the digital economy remains concentrated in North America. The ecological costs of the industry – from cobalt mining in the Democratic Republic of Congo to the mountains of electronic waste exported from wealthy nations – are borne disproportionately by communities in the Global South.
Platforms, polarisation, and the information crisis
The discussion extended beyond the labour question to the role of social media platforms in shaping – and distorting – public information. One participant in the group raised the challenge of navigating the torrent of AI-generated content: how, she asked, do we protect our societies from a flood of information in which truth and fabrication are increasingly hard to distinguish?
Dachwitz pointed to the structural logic of platforms like Meta: their algorithms are designed to maximise attention and interaction, not to serve the public interest.
Documents disclosed in 2021 by whistleblower Frances Haugen revealed that Meta’s systems weighted an angry-emoji reaction five times more highly than a simple thumbs-up, because anger drives engagement. Viral posts, the same documents showed, contain four times more false information than non-viral ones.
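In ranking terms, that weighting has a simple consequence, sketched below. The function and numbers merely illustrate the 5:1 ratio the documents describe; this is not Meta's actual ranking code.

```python
# Engagement-weighted ranking: an illustration of the reported 5:1
# angry-to-like weighting, not Meta's actual code.
REACTION_WEIGHTS = {"like": 1, "angry": 5}

def engagement_score(reactions: dict[str, int]) -> int:
    """Score a post by summing reaction counts times their weights."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

# A post that provokes anger outranks one that is merely liked:
calm_post  = {"like": 100}              # score: 100
angry_post = {"angry": 30, "like": 10}  # score: 160
assert engagement_score(angry_post) > engagement_score(calm_post)
```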
Nobel Peace Prize laureate and journalist Maria Ressa has described Meta as “biased against facts” – its very architecture rewards the sensational and the divisive.
Jeremy Niyiguha, a journalist from Rwanda, offered three responses to the integrity challenge: editorial control (no copy-pasting from AI into publication), transparency with audiences about when AI has been used, and sustained investment in media literacy. He also stressed that AI reflects human input – it does not create truth – which makes the diversity of the voices shaping it a matter of urgent concern.
Resistance, alternatives, and the road forward
Dachwitz was clear that his analysis of the problem is not the whole story. He ended his presentation on a note of deliberate hope, highlighting the organising that is already underway.
In Kenya, 184 content moderators have sued Meta, TikTok, and the outsourcing firm Sama over their working conditions. Workers have formed their own organisation – the Data Labelers Association – because existing unions did not represent their interests.
In 2023, content moderators in Berlin and Nairobi came together to publish a joint manifesto calling for decent working conditions for all platform workers worldwide, explicitly naming digital colonialism as the system they are challenging.
Participants in the WACC Europe discussion contributed further perspectives on resistance and alternatives.
Philip Lee, WACC General Secretary, noted a growing emphasis in the Global South on the appropriation rather than mere adoption of technology – Indigenous communities in Latin America building their own community radio stations and connectivity networks, becoming creators on their own terms rather than passive consumers of tools designed elsewhere. He argued that the discussion must challenge the assumption that there is only one technological future, imposed by Silicon Valley.
WACC Europe President Peter Reimann raised a pointed dilemma regarding minority and Indigenous languages: If communities choose not to contribute their language data to large language models, their cultures remain invisible in AI systems; if they do contribute, they risk having that data monetised by corporations with no accountability to them.
Dachwitz cited the data sovereignty movement of the Māori people in New Zealand as a model worth studying – an effort to develop community-controlled AI infrastructure that does not surrender cultural data to outside commercial interests.
Sara Speicher, WACC Deputy General Secretary, asked what practical levers exist for civil society organisations. Dachwitz pointed to legal accountability as among the most promising: activists are pushing for legislation that would hold tech companies responsible for the labour conditions in their outsourcing chains – the same principle that reformers have long sought to apply to companies whose supply chains rely on exploitative mining or manufacturing. The Kenyan court cases, whatever their outcome, are establishing important precedents.
Dachwitz also called for investment in genuine alternatives to corporate-controlled digital infrastructure: decentralised, non-commercial communication platforms such as Mastodon and the Fediverse.
Regulation alone – the approach that has characterised the European Union’s response in recent years – is not sufficient, he suggested. The task is to build a different kind of digital world altogether, one premised on a different set of values.
Image credit: Gloria Mendoza / Better Images of AI / CC BY 4.0
WACC Europe’s monthly discussions aim to bring a justice angle to communication and technology’s challenges and build networks of informed engagement. Contact WACC Europe for more information.