ChatGPT and the future of African AI

With limited training data matching African cultural and economic realities, the output of ChatGPT could be skewed toward reinforcing Western cultural and ideological hegemony.

Opinion by Leo Komminoth


The recent launch of ChatGPT, a powerful chatbot developed by California-based OpenAI, has sparked concern about the under-representation of Africa in the field of artificial intelligence. ChatGPT is trained on huge datasets of human-written texts, so that it can give users instant answers to both serious questions and more frivolous inquiries, such as “can you write a report on chatbots in the style of Frantz Fanon? Or in the style of a Miriam Makeba song?”

But texts from developed countries are heavily over-represented in the training datasets, with only a small number coming from Africa. According to Mozilla’s Internet Health Report 2022, from 2015 to 2020, Egypt was the only African country whose datasets were used to evaluate the performance of such machine-learning models, with just 12 instances recorded.

This reflects a larger trend in machine-learning research. Sub-Saharan Africa accounts for just 1.06% of the world’s total AI journal publications. In contrast, East Asia and North America account for 42.87% and 22.70% respectively.

The under-representation of Africa has significant implications for the continent. With limited training data matching African cultural and economic realities, the output of ChatGPT is likely to be skewed toward reinforcing Western cultural and ideological hegemony – because that’s all it has learned.

The spectre of digital feudalism

Justin Arenstein, the CEO of Code for Africa, a network of civic tech and open data labs, noted when his company joined the coalition Partnership on AI that "context matters. Machine learning – often marketed as 'artificial intelligence' or AI – is reshaping the way that authorities make decisions, impacting millions of lives. It also shapes how human rights defenders fight online hate and abuse, and even what is considered to be the truth. But, in Africa, many of these decisions are based on data and algorithms that have no relevance to our reality."

With its bold promises to “bank the unbanked”, overhaul education, healthcare and agriculture, tech is taking an increasingly central role in the lives of millions of Africans. But what if the underlying technology reflects the dominance of the Western and Asian companies that created it, rather than reflecting the values, norms, and interests of the African countries where it is used?

Nigeria, the continent's tech giant, where startups attracted $1.2bn of funding in 2022, is believed to import approximately 90% of all software used in the country. But if Nigerian fintech, agritech, or edtech startups base their business models on AI tools conceived across the Atlantic, can the country have any meaningful digital sovereignty?

As the Ethiopian cognitive scientist Abeba Birhane puts it: “The West’s algorithmic invasion simultaneously impoverishes development of local products while also leaving the continent dependent on its software and infrastructure.”

Moreover, the fact that just 39% of Africans have internet access, compared to 89% in North America, further exacerbates the problem, limiting their ability to contribute to the data that AI models rely on to learn. “The world’s offline populations are the disputed territory of tech empires because whoever gets them locked into their digital feudalism holds the key to the future,” said Guatemalan technology activist Renata Ávila Pinto in a paper released last year.

Working in damaging corners of the internet

Another ominous sign for the continent is the reported way in which OpenAI, the organisation behind ChatGPT, used low-paid data-labelling workers in Kenya for traumatic tasks.

An essential part of the AI industry consists of workers – mostly in low-wage countries – who assess internet content for violence, hate speech, and sexual abuse, so that AI tools can learn to detect toxic content and filter it out. In short, AI needs humans to tell it what is unacceptable to say or show.
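As a rough illustration of that labelling-to-model loop – a minimal sketch only, using a toy dataset and an off-the-shelf scikit-learn classifier, not anything OpenAI actually deploys – human judgements can be turned into an automated filter along these lines:

# Minimal sketch (not OpenAI's real pipeline) of how human-labelled
# examples teach a model to flag toxic text. The tiny dataset and the
# scikit-learn classifier are purely illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labels supplied by human annotators: 1 = toxic, 0 = acceptable.
texts = [
    "I will hurt you",           # labelled toxic by a human reviewer
    "Have a lovely day",         # labelled acceptable
    "You people are worthless",  # labelled toxic
    "Thanks for your help",      # labelled acceptable
]
labels = [1, 0, 1, 0]

# Train a simple text classifier on the human judgements.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can now score new content for moderation;
# with so few examples the output is only illustrative.
print(model.predict(["You are worthless"]))

The point of the sketch is that the model's sense of what is "toxic" comes entirely from the human labels it is given – which is precisely the work being outsourced.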

Until February 2022, OpenAI enlisted Sama – a San Francisco-based firm that had previously worked with Facebook's parent company Meta. Labourers were sent snippets of disturbing text to label and were exposed to graphic sexual content – all to rid the tool of violence and hate speech – for wages of less than $2 an hour, according to reports.

Sama said its workers can take advantage of individual and group therapy sessions with “professionally-trained and licensed mental health therapists”.

The firm first came under the spotlight when South African whistle-blower and former Sama employee Daniel Motaung revealed the traumatic nature of his work. Employees at Sama, he said, are asked to watch disturbing content in graphic detail with limited psychological assistance from the employer, leading to mental health problems.

The need for critical voices

Low wages and weak labour laws in some countries make the continent an attractive place for foreign tech companies, which see an opportunity to outsource cumbersome tasks at a low price. So while African employees may have no meaningful role in creating content, they may be enlisted, on low wages, to sift through the toxic waste of the internet to moderate content for AI tools.

As AI receives attention from investors across the world, outsourcing firms are expected to flourish on the continent in the next few years. The challenge for countries will be to enforce labour protections while continuing to attract the capital of Big Tech.

In reality, AI goes far beyond ChatGPT. But the recent waves made by the programme remind us how powerful this technology can be. There is a need to raise more critical voices and to question the impact of Silicon Valley companies' dominance over vulnerable countries.

Much of AI's potential has not yet been exploited. One day AI-powered solutions may indeed make farming and healthcare systems more efficient, helping to lift millions of people out of poverty. But the technology should not be blindly trusted, and its harmful consequences must be assessed.

Leo Komminoth is a staff journalist at African Business magazine.
