International Finance

Are humans making way for AI loan officers?

For borrowers, the shift may be invisible: applications are approved faster, and rejections arrive more quickly. What changes quietly, beneath the surface, is how those decisions are made

A loan once depended on a banker’s instinct. A handshake. A conversation. A sense, sometimes imperfect, sometimes deeply human, of whether someone could be trusted. Today, that decision may take seconds. And, it may not involve a human at all.

Across global banking systems, artificial intelligence is moving from back-office optimisation to the heart of credit decision-making. The shift is subtle. There are no public announcements declaring that machines now approve mortgages. Yet increasingly, algorithms analyse income, spending patterns, behavioural signals, and even alternative data before a human ever sees an application.

So, the question is unavoidable: Are machines deciding who gets loans? And if so, what happens to human judgement?

The Quiet Expansion Of AI In Lending

Artificial intelligence is already deeply embedded in financial services.

“AI is transforming banking quite significantly, and the pace of adoption is fast,” Wahyu Jatmiko, Assistant Professor in Banking and Finance at the University of Southampton Business School, told International Finance.

He points to data from the Bank of England and Financial Conduct Authority showing that around 75% of UK financial institutions were using AI by 2024, up from 58% just two years earlier.

However, he explains, the heavy use remains concentrated in internal optimisation, cybersecurity and fraud detection. In underwriting specifically, adoption is more measured.

“Roughly around 15% of firms use AI directly in credit underwriting,” he estimates.

That figure may sound modest. But the deeper shift is structural.

“Even where it is not fully taking over, AI is increasingly embedded in the process,” Jatmiko says.

Instead of relying entirely on traditional credit bureau scores, systems now analyse real-time transaction data, behavioural patterns, and alternative datasets.

The result is not necessarily that machines approve all loans. Rather, underwriting is becoming faster, more data-driven, and far more granular.

James Ekpa, an AI researcher at the Blockchain Technology Association for Black & Minority Ethnic Engineers (AFBE-UK), did not mince words.

“Yes, machines are increasingly deciding who gets loans today,” he told International Finance.

He sees a transformation from slow, manual processes to rapid, automated systems powered by machine learning algorithms.

Speed, consistency and scalability are among AI’s biggest advantages. Decisions can be made quickly. Models apply the given criteria uniformly. And, algorithms can analyse thousands of variables at a scale humans simply cannot match.

Efficiency, in other words, is no longer the differentiator. It is the baseline expectation.

Enhancement Or Replacement?

But, does faster mean better? And more importantly, does faster mean human judgement is fading?

“At the moment, I clearly see AI as enhancing human judgement rather than replacing it,” Jatmiko says.

He cites UK data suggesting that while about 55% of AI applications involve some automated decision-making, only around 2% are fully autonomous.

Creditworthiness, he argues, is not merely about predicting default probabilities. It involves context, borrower circumstances, regulatory constraints, and sometimes ethical considerations.

AI excels at analysing large datasets, document reading, income verification, and affordability calculations. In that sense, Jatmiko says, it acts like a powerful analyst. But final approvals, particularly for complex or high-value loans, still rest with humans.

Ekpa agrees that the human role is evolving rather than disappearing. Loan officers today increasingly review edge cases and borderline applications. They handle complex deals. They explain decisions to customers. They monitor model outputs and escalate anomalies.

The job is changing. It is becoming supervisory. That may be the real transformation.

When Context Meets Code

The limits of automation become clearer when qualitative factors enter the picture.

Jatmiko describes a hypothetical but realistic scenario: a small business reporting temporary losses due to a supply chain shock while holding strong long-term contracts. A human underwriter may interpret the broader narrative and take a forward-looking view. An algorithm trained primarily on historical default data might simply detect recent losses and flag high risk.

“There is research showing a mismatch between what AI models consider important and what human loan officers see as meaningful indicators of creditworthiness,” he explains.

Humans can contextualise. They can sometimes account for structural disadvantages when justified by circumstances. AI, by design, optimises patterns found in historical data. That difference is subtle. But in lending, subtle differences affect livelihoods.

The Question Of Bias

Advocates of AI argue that machines eliminate prejudice. Algorithms do not discriminate intentionally. They do not favour friends. They apply rules consistently. And that consistency is powerful.

But, consistency applied to flawed historical data can create new problems.

“AI can reduce certain types of human bias, but it can also embed and even amplify systemic bias,” Jatmiko explains.

If past lending patterns reflected unequal treatment of certain demographic groups, models trained on that data may internalise those patterns as objective signals of risk.

Ekpa echoes this concern. One of the primary risks, he notes, is bias amplification. If historical data contains discrimination, models may encode and intensify it.

Transparency is another issue. Complex models can be difficult to explain. Borrowers denied credit may receive little more than a generic explanation.

“Opacity raises regulatory and consumer trust concerns,” Ekpa warns.

Then there is model drift: as economic conditions change, the patterns a model was trained on gradually stop holding, and its performance degrades. Without continuous monitoring, systems may misprice risk during volatile periods.

In short, bias does not disappear. It changes form.

Accountability In An Algorithmic Age

If an AI-driven system denies a borrower unfairly, who is responsible? The answer is not always clear. Multiple actors are involved – AI manufacturers, developers, third-party providers, and lenders themselves.

However, both experts converge on one principle: accountability ultimately rests with the financial institution.

Ekpa says AI is a tool, not a legal entity. Financial institutions remain responsible for the models they deploy, the data they use, and the governance frameworks they maintain.

Jatmiko does not overcomplicate it. If a loan decision turns out to be unfair, the algorithm cannot be the one blamed. The bank chose to use it, so the bank carries the responsibility. That means senior leaders cannot hide behind technical language. They have to stand behind the outcomes.

He stresses that human oversight is not optional, especially for complicated or sensitive cases. Models need regular checks. They need to be tested for bias. They need proper audit trails. Otherwise, problems build quietly.

He also worries about something bigger. If many banks start depending on the same AI providers, risk can pile up across the system. One flaw could affect more than just one institution. Efficiency is important. But, it cannot come before accountability.

The Islamic Finance Lens

From an Islamic finance point of view, this is not just a technical debate. It goes deeper than that. Islamic banking is guided by Maqasid al-Shariah, the higher objectives of Islamic law centred on justice, fairness, and social welfare. Lending is not only about numbers on a balance sheet. It carries a social responsibility.

Yes, AI can make processes smoother. Faster approvals. Cleaner risk models. That part is clear. But Jatmiko flags something more subtle. If the data used to train these systems reflects a past where small businesses were routinely sidelined, the algorithm may quietly repeat that history.

And if that happens, the technology could end up working against the very goals Islamic finance is supposed to protect. Not intentionally, but just by following patterns.

Aligning AI with ethical principles requires intentional intervention in model design and governance. It may require bringing social scientists into AI development processes. It may demand stronger oversight, particularly when tools are sourced from third-party providers.

Technology alone does not guarantee fairness. Design choices matter.

Possibility Of A Hybrid Future

So, where is banking headed? Fully automated lending systems may emerge in low-risk, low-value segments. Routine cases can be processed at speed and scale. But both experts see the broader future as hybrid.

Ekpa believes competitive advantage will come from institutions that combine AI’s analytical power with human judgement, rather than eliminating one in favour of the other.

Jatmiko similarly expects automation to expand, but insists that human supervision will remain essential, especially for complex or high-impact decisions.

Human-in-the-loop processes are already becoming common. Algorithms analyse. Humans validate. Decisions are checked before final approval. Perhaps, the future banker will not be replaced, but repositioned.

The Big Question

For borrowers, the shift may be invisible. Applications are approved faster. Rejections arrive more quickly, too. What changes quietly, beneath the surface, is how those decisions are made.

Is human judgement fading? Or simply moving further upstream, designing and supervising the systems that now perform the analysis?

The rise of the AI loan officer is not dramatic. No headlines are announcing the end of human bankers. Instead, there is gradual integration, more data, faster models, and shorter decision times.

Machines are increasingly involved. That much is clear. But whether they ultimately decide, or merely assist, depends less on technological capability and more on governance choices.

Banks can treat AI as an efficiency engine. Or, they can treat it as a tool that augments, rather than overrides, human responsibility. The distinction may determine not only how loans are approved, but how trust in the financial system evolves in the years ahead. And trust, unlike data, cannot be automated.
