Banks and fintechs drive surge in AI-approved loans

AI allows financial institutions to approve loans more quickly than ever, but some experts warn it could deepen inequalities.

On the outskirts of Johannesburg, 45-year-old Grace Ndlovu used to watch her kiosk shelves sit half-empty during slow periods. With no formal credit history, banks wouldn’t lend to her. She had no choice but to wait – until last year, when she secured her first loan through MTN’s mobile money platform, MoMo, and fintech firm JUMO.

Qwikloan, their lending service powered by artificial intelligence (AI), offers short-term credit ranging from R250 ($13) to R10,000 ($540), assessing applicants based on mobile money transactions, airtime purchases, and repayment patterns. Approval takes minutes.
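
For readers curious how this kind of scoring works under the hood, here is a minimal Python sketch. It is purely illustrative: the feature names, weights and cut-off are invented and do not reflect JUMO's or MTN's actual models. The idea is simply that behavioural records become numeric features, which are mapped to a repayment probability and then to a loan limit.

    from dataclasses import dataclass
    from math import exp

    @dataclass
    class ApplicantFeatures:
        monthly_momo_transactions: int   # count of mobile money transactions
        avg_airtime_topup_rands: float   # average airtime purchase size
        prior_loans_repaid_on_time: int  # repayment history on the platform
        prior_loans_defaulted: int

    def repayment_probability(f: ApplicantFeatures) -> float:
        # Toy logistic score combining behavioural signals into a 0-1 probability.
        z = (-2.0
             + 0.02 * f.monthly_momo_transactions
             + 0.01 * f.avg_airtime_topup_rands
             + 0.8 * f.prior_loans_repaid_on_time
             - 1.5 * f.prior_loans_defaulted)
        return 1.0 / (1.0 + exp(-z))

    def loan_limit(f: ApplicantFeatures) -> int:
        # Map the score onto the R250-R10,000 range quoted above; decline below 0.5.
        p = repayment_probability(f)
        if p < 0.5:
            return 0
        return int(250 + p * (10_000 - 250))

    print(loan_limit(ApplicantFeatures(120, 45.0, 3, 0)))

A frequent trader with a clean repayment record scores near the top of the range; an applicant with defaults is declined outright. Real systems weigh far more signals, but the shape of the decision is the same.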

“I never thought I’d qualify for a loan,” says Ndlovu. “When I need to stock up for busy weekends, I can now get money fast.”

Across Africa, AI-driven credit scoring is reshaping financial access. More than 350m adults remain locked out of formal banking, yet digital lenders are approving loans at an unprecedented scale.

This technology is creating opportunities – but also raising concerns. Borrowers often have little idea how their data is used, with credit models factoring in everything from phone usage patterns and personal messages to social media activity, drawing scrutiny from regulators.

Rapid approvals

Securing a loan once meant visiting a bank, filling out forms, and waiting weeks for an answer. Without collateral or a credit history, rejection was almost certain. Then came mobile money. Since M-Pesa’s launch in Kenya in 2007, mobile wallets have revolutionised transactions, allowing millions to send and receive money without a bank account. But as mobile money took hold, it also left behind data trails – recording how people topped up airtime, paid bills, and transferred money. Lenders began to take notice.

In 2012, Kenya’s M-Shwari pioneered mobile lending, using transaction histories and call records to assess borrowers. It was a breakthrough, but AI has since pushed the model further at firms like MTN.

“We’ve gone from banks processing a few hundred loans a day to digital lenders approving 100,000 in the same time,” says John Mark Ssebunnya, general manager for fintech architecture at MTN Group.

“Large language models (LLMs) can now be deployed to analyse more complex data sources and make decisions in seconds.” AI models now analyse SMS content for financial behaviour – messages discussing debts can contribute to a borrower’s profile. Even informal loans can leave digital traces.

Beyond text data, lenders are integrating social media insights, provided users consent.

“If permitted, lenders can extract valuable behavioural signals,” says Ssebunnya. Open banking is expanding the model further, allowing financial institutions to assess borrowers using bank records, utility payments, and even pay-TV subscriptions.
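
The consent point matters technically as well as legally: in a well-designed pipeline, sources the borrower has not opted into are never queried at all. The sketch below illustrates that pattern; the source names and stubbed values are hypothetical, not any lender's actual integration.

    from typing import Callable, Dict

    def gather_features(applicant_id: str,
                        consents: Dict[str, bool],
                        sources: Dict[str, Callable[[str], dict]]) -> dict:
        # Pull signals only from sources the applicant has explicitly consented to.
        features: dict = {}
        for name, fetch in sources.items():
            if consents.get(name, False):   # no consent -> source skipped entirely
                features.update(fetch(applicant_id))
        return features

    # Example wiring with stubbed fetchers
    sources = {
        "mobile_money": lambda uid: {"monthly_inflow": 4200.0},
        "sms_debt_signals": lambda uid: {"debt_mentions_30d": 2},
        "open_banking": lambda uid: {"utility_bills_paid_12m": 11},
    }
    print(gather_features("user-123", {"mobile_money": True, "open_banking": True}, sources))

Here the SMS source is skipped because consent was never given, so those signals simply do not enter the borrower's profile.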

Last year, MTN’s lending operations issued between $1.4bn and $1.6bn in loans to over 5m people, says Ssebunnya. A decade ago, that level of financial inclusion was unthinkable.

For small businesses, these micro and nano-loans are lifelines. “A retailer operating a grocery stall with just $200 to $500 in stock might seem small-scale, but access to fast credit means they can restock regularly, keeping their business afloat,” Ssebunnya says.

Can AI lend money fairly?

Yet as AI-driven lending expands, so do the risks. The very systems designed to broaden financial inclusion could just as easily deepen inequality or exploit vulnerable borrowers.

“One of the biggest data privacy concerns is around the sources of data that AI-driven lending systems use for credit scoring,” says Nanjira Sambuli, a Kenyan tech policy researcher.

“Some – while guised as innovative – are concerning, such as use of social media profiles and online profiles of potential [borrowers] to gauge creditworthiness. Another one is unsolicited prompts for ‘instant loans,’ many of which are developed based on trends observed around financial behaviour, and trading in datasets around use of platforms such as mobile money,” says Sambuli.

The dynamics around informed user consent – for transaction data or contact details to be shared – are especially warped in African markets, Sambuli adds, “and would require strict enforcement of data protection regulations to curb or disrupt the status quo”.

The implications are stark. Those without a digital footprint risk exclusion, while those with one may be bombarded with unsolicited loan offers, often fuelled by unseen algorithms tracking their financial habits.

Bias is another risk. AI models are only as fair as the data they are trained on. Without safeguards, they can reinforce existing inequalities, favouring certain borrowers over others.

Tausi Africa, a Tanzanian fintech firm behind the AI credit scoring platform Manka, has taken steps to address concerns around bias in lending.

“Our AI doesn’t use metadata like gender or race,” says Said Baadel, Manka’s director of research and development. “We focus on transactional patterns and affordability, removing elements that could introduce bias.”
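
In practice, the approach Baadel describes amounts to stripping protected attributes before any record reaches the model. The short sketch below shows the idea; the field names are invented and this is not Manka's actual pipeline.

    PROTECTED = {"gender", "race", "ethnicity"}
    BEHAVIOURAL = ["avg_monthly_inflow", "bill_payment_streak_months", "debt_to_income_ratio"]

    def scoring_record(applicant: dict) -> dict:
        # Drop protected attributes outright, then keep only behavioural fields.
        cleaned = {k: v for k, v in applicant.items() if k not in PROTECTED}
        return {k: cleaned[k] for k in BEHAVIOURAL if k in cleaned}

    raw = {"gender": "F", "avg_monthly_inflow": 4200.0,
           "bill_payment_streak_months": 11, "debt_to_income_ratio": 0.3}
    print(scoring_record(raw))  # the protected field never reaches the model

Dropping such fields is only a first step, though: behavioural signals can still act as proxies for them, which is one reason researchers like Sambuli argue for regular audits of these systems.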

Tausi Africa says that it has incorporated gender lens investing (GLI) frameworks and ethical AI principles into its model development. This approach aims to identify systemic barriers in financial access and refine algorithms to actively promote inclusion, particularly for women and youth – two groups historically underserved in credit markets.

Alternative data sources like utility payments and financial obligations are transforming credit assessments.

“A kiosk owner regularly paying electricity and water bills through mobile money demonstrates financial discipline and commitment to recurring obligations,” says Baadel. “These payments also help confirm proof of residence – whether a borrower owns or rents their home or business space – providing additional context for risk assessment.”

While this boosts loan approval rates and financial inclusion, it also introduces new risks. Unchecked AI models can trap borrowers in cycles of debt, warns Sambuli.

“Unexplainable, un-auditable AI-driven credit scoring could absolutely widen financial inequality, and even jeopardise financial health. If borrowers are algorithmically targeted to ‘keep borrowing from Peter to pay Paul,’ the system is failing them.”

Financial inclusion potential

M-KOPA, a company that provides digital financial services to underbanked consumers, says it uses Microsoft’s AI services to assess lending risks and improve financial forecasting. It processes over 500 payments per minute, enabling 3m people across Africa to access solar power systems, digital loans, health insurance, and smartphones. It says the AI-driven model has led to 440,000 additional credit lines for customers following successful repayment – and claims that predictive analytics can enhance financial access while managing risk.

Fraud detection is also a key AI capability. “Take Nigeria, with a population of over 200m,” says Ssebunnya. “Even if we focus only on mobile users, that’s 100m individuals generating transaction data. AI is the only way to detect anomalies at scale. For instance, if someone inserts ten different SIM cards into a phone in a month, AI can assess whether it signals fraud or a legitimate use case.”
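
The SIM-card example boils down to counting distinct SIMs seen per device each month and flagging outliers for human review. The toy check below illustrates that logic; the threshold is invented, and a production system would combine many such signals rather than rely on one rule.

    from collections import defaultdict

    def flag_sim_anomalies(events, max_sims_per_month=5):
        # events: iterable of (device_id, month, sim_id) tuples
        sims_seen = defaultdict(set)
        for device_id, month, sim_id in events:
            sims_seen[(device_id, month)].add(sim_id)
        return {key: len(sims) for key, sims in sims_seen.items()
                if len(sims) > max_sims_per_month}

    # Ten different SIMs in one phone in a single month gets flagged for review.
    print(flag_sim_anomalies([("phone-1", "2024-06", f"sim-{i}") for i in range(10)]))

Whether such a flag signals fraud or a legitimate use case – a phone shared within a family, say – is exactly the judgement the AI, or a human reviewer, then has to make.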

Regulators left behind

Regulators are playing catch-up. Kenya’s central bank has introduced licensing for digital lenders, while Nigeria’s Federal Competition and Consumer Protection Commission has cracked down on predatory lending. Yet enforcement remains inconsistent, and many AI-driven credit models operate in regulatory grey areas.

“Accountability of the scoring systems must be a policy priority and regulatory requirement. This can be achieved through regular audits – both self-assessments by the companies and through availing datasets to researchers and civil society,” says Sambuli.

Stricter regulation of digital lending is taking hold in markets like Kenya and Nigeria, but regulatory bodies still struggle to keep pace with the rapid evolution of AI-driven credit systems. The real test lies in adapting policies to match technological advancements while ensuring consumer protection.

“The challenge ahead is striking a balance between innovation and accountability,” says Ssebunnya. “AI models are only as good as the data they are trained on. While AI has helped democratise access to credit, we can’t ignore the risks of bias. The key challenge is ensuring AI models do not reinforce existing inequalities.”
