
AI drives change in global markets


Artificial intelligence (AI) is reshaping how financial markets operate. What was once an arena of human traders shouting orders on crowded floors is now dominated by computer algorithms.

Financial firms have long applied statistics and computing to markets, starting with rule-based program trading in the 1970s and 1980s. In the 1990s and 2000s, machine learning and neural networks added sophistication.

For example, hedge funds like Renaissance Technologies hired PhDs to use AI for pattern recognition. Today, we stand at a new inflexion point with generative AI and large language models that can process massive streams of text and data and even suggest novel trading ideas. As one Wharton finance expert notes, AI’s evolution “from algorithmic trading to personalised advice” has made finance “fertile ground for AI innovation.”

Applications of AI in finance

AI is now embedded in many financial processes. Broadly, AI serves in trading, analysis, and operations. In trading, automated systems place orders faster than any human can. High-frequency trading algorithms, often powered by machine learning (ML), make thousands of small trades every second to exploit tiny price discrepancies. Many of the largest trading venues are dominated by such “automated trading” in highly liquid assets. In other domains, AI systems read and summarise information.

For example, NLP tools scan newsfeeds and social media to gauge market sentiment, a process known as sentiment analysis. A sudden burst of negative tweets about a company might trigger selling by algorithms. In risk modelling and compliance, AI churns through vast data to calculate creditworthiness or portfolio risk in real time.
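
To make the sentiment-analysis step concrete, here is a minimal Python sketch, assuming NLTK’s off-the-shelf VADER analyser and a made-up feed of headlines; production sentiment engines use far richer models, and the trading threshold shown is purely illustrative.

```python
# Minimal sentiment-to-signal sketch using NLTK's VADER analyser.
# The headlines and the -0.3 threshold are illustrative assumptions,
# not a production trading rule.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

headlines = [
    "Regulator opens probe into AcmeCorp accounting",
    "AcmeCorp beats quarterly earnings expectations",
    "AcmeCorp CEO resigns amid fraud allegations",
]

analyser = SentimentIntensityAnalyzer()
# Average the compound score (-1 very negative, +1 very positive) across headlines.
avg_score = sum(analyser.polarity_scores(h)["compound"] for h in headlines) / len(headlines)

# Toy rule: a strongly negative average flags a potential sell signal.
signal = "SELL" if avg_score < -0.3 else "HOLD"
print(f"average sentiment {avg_score:.2f} -> {signal}")
```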

Lenders and insurers use AI to predict defaults or claims, while banks deploy chatbots to handle customer queries. In short, AI touches everything from trade execution to loan approvals and is effectively “democratising” access to analytics that only big institutions once had.

The influence of AI and algorithms is clearest in a few headline-grabbing episodes. In January 2021, the GameStop saga showed the power of social sentiment and automated strategies. A surge of retail traders on Reddit’s WallStreetBets sent the share price of the video-game retailer GME skyrocketing over several days.

Hedge funds that had short positions in the stock rushed to close them. Eventually, trading apps temporarily halted trading, igniting a political firestorm. Researchers note that “retail investors using the Robinhood platform” collectively drove the sharp price swing. Although that episode was driven by human coordination online, it attracted algorithmic responses, with some trading bots detecting the rapid price trend and either piling in or pulling out, amplifying volatility.

AI-driven trading has also featured in the activity of quantitative hedge funds. Firms like Renaissance Technologies, Two Sigma, D.E. Shaw, and others have long used machine learning to devise strategies; a 2019 survey identified these firms as pioneers in AI-driven investing. They process vast alternative datasets, from satellite imagery of retail parking lots to aggregated price patterns, looking for subtle predictive signals.

For example, AI can spot from satellite images that a retail chain’s parking lots are busier, or read thousands of local news sites to update earnings estimates. In late 2022, Reuters reported that Renaissance’s quant funds were using such models to target returns. Although strategies are secretive, experts agree that AI “provides a competitive advantage” in systematic trading.

AI and social media can also combine in troubling ways. Studies and news accounts warn of sentiment manipulation using bots. In a recent report, experts imagined hundreds of AI-generated social media profiles pushing a narrative about a stock. Real people reacting to the buzz drive the price up or down, while those who detect the narrative profit.

The danger is that neither the promoter nor some of the manipulators even realise they’re part of a larger AI-driven scheme, making enforcement hard. In practice, regulators have seen smaller-scale attempts in crypto and DeFi, where “malicious actors…deploy AI bots” on platforms like Telegram to hype assets.
These examples highlight how automated sentiment analysis and engagement can influence markets: sometimes legitimately, when bots surface genuine trends, and at other times through coordinated pumping.

Speed, scale and smarter markets

The attraction of AI in finance is clear, as it does things humans cannot. Speed and automation are paramount. Machines can execute orders in microseconds and monitor markets around the clock, far faster than any trading floor. This rapid processing tightens bid-ask spreads and improves liquidity in normal times.

As the IMF notes, technology has “improved price discovery, deepened markets, and often dampened volatility” in normal periods. AI also excels at scalability and data processing. Financial markets generate enormous volumes of data on prices, news, social posts, filings, and satellite images, and AI can sift through it all.

Advanced neural networks and LLMs (Large Language Models) can turn unstructured text into structured signals. For instance, a generative model can instantly read a regulatory filing or earnings call transcript, flagging risks or opportunities. The IMF notes that generative AI lets investors “process very large amounts of unstructured, often text-based, data,” which can improve forecasts and price accuracy.
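
As an illustration of that idea, the sketch below asks an LLM to turn a short filing excerpt into a structured risk flag. It is a minimal sketch assuming the OpenAI Python client and an API key in the environment; the model name, prompt, and excerpt are placeholders, not any firm’s actual pipeline.

```python
# Sketch: turning an unstructured filing excerpt into a structured risk flag.
# Assumes the OpenAI Python client with OPENAI_API_KEY set in the environment;
# the model name, prompt, and excerpt are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()

filing_excerpt = (
    "The company notes that a material portion of revenue depends on a single "
    "supplier, and that pending litigation may adversely affect future results."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is available
    response_format={"type": "json_object"},  # request machine-readable output
    messages=[
        {
            "role": "system",
            "content": (
                "Extract risk factors from the user's text. Reply as JSON: "
                '{"risks": ["..."], "severity": "low|medium|high"}'
            ),
        },
        {"role": "user", "content": filing_excerpt},
    ],
)

# In practice the output should be validated before it feeds any downstream model.
report = json.loads(response.choices[0].message.content)
print(report["severity"], report["risks"])
```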

Another benefit is pattern recognition and precision. AI algorithms can spot complex statistical patterns that humans cannot see, such as nonlinear relationships or high-dimensional correlations.

In portfolio management, for example, deep-learning models and reinforcement learning (RL) can adapt trading rules over time. Quantitative analysts now use RL to optimise asset allocation dynamically, a method well-suited for constantly shifting markets.

These models “identify complex patterns in large datasets” by using millions of parameterised rules, going far beyond traditional formulae. In effect, AI can tailor strategies to ever-changing conditions, learning minute details of market microstructure.
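
To show the adaptive flavour of these methods, without claiming to reproduce any fund’s models, here is a toy Python sketch: a simple epsilon-greedy “bandit” that shifts attention between two assets as simulated returns arrive. The return distributions and parameters are invented for illustration; real reinforcement-learning allocators use far richer state, reward, and risk definitions.

```python
# Toy epsilon-greedy allocator: shifts attention between two assets based on
# observed (here simulated) returns. A sketch of the adaptive idea only; the
# return distributions and parameters are invented for illustration.
import random

random.seed(42)
assets = ["equities", "bonds"]
avg_reward = {a: 0.0 for a in assets}   # running mean return per asset
counts = {a: 0 for a in assets}
EPSILON = 0.1                           # exploration rate

def simulated_return(asset: str) -> float:
    # Stand-in for the market: equities drift higher but are more volatile.
    mu, sigma = (0.0006, 0.01) if asset == "equities" else (0.0002, 0.003)
    return random.gauss(mu, sigma)

for step in range(5000):
    # Explore occasionally; otherwise exploit the asset with the best mean so far.
    if random.random() < EPSILON:
        choice = random.choice(assets)
    else:
        choice = max(assets, key=lambda a: avg_reward[a])
    r = simulated_return(choice)
    counts[choice] += 1
    # Incremental mean update keeps the estimate adapting as new data arrives.
    avg_reward[choice] += (r - avg_reward[choice]) / counts[choice]

print({a: round(v, 5) for a, v in avg_reward.items()}, counts)
```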

AI also brings efficiency and consistency: routine tasks like compliance checks or customer service get automated via RegTech tools and chatbots, freeing humans for higher-level thinking. In trading, even a tiny improvement can be valuable. A recent AI pilot by HSBC reportedly found a quantum-enhanced model that improved trade-fill predictions by 34% over classical methods.

Finally, AI can open new markets and lower costs. According to the IMF, AI tools are reducing barriers to entry and making it feasible for smaller firms or even individuals to analyse less-liquid markets like emerging debt or certain commodities. By automating research, coding, and data gathering, generative AI might lower the expertise needed to trade exotic assets.

In retail finance, AI-powered robo-advisors have democratised wealth management. One report notes that about half of retail investors say they would use ChatGPT or similar AI to choose or rebalance investments.

This suggests AI is making advanced analysis available to “anyone,” not just Wall Street. Overall, proponents argue that these gains in faster reactions to news, more thorough analysis, and automation should make markets more efficient and investors more informed.

Herding, black boxes and volatility

AI in finance may sound like an interesting and exciting concept, but it is not risk-free. A key concern is model correlation or “monoculture.” When many firms use similar data and algorithms, their trades tend to move together. Regulators and economists warn that this can amplify swings.

For example, if numerous deep-learning models all see a similar signal, they might simultaneously sell stocks, creating a cascade. The Bank of England and the SEC have warned that advanced AI’s “hyper-dimensionality” and shared data sources could lead to just a few dominant models or data providers. In practical terms, a “monoculture” of strategies can increase market correlations and herding. In stressed markets, this may cause liquidity to evaporate suddenly.

A recent IMF analysis noted that many algorithmic funds include safety mechanisms that can all activate at once, causing feedback loops. The 2010 “Flash Crash” is a cautionary example: an automated sell order in one market set off a chain reaction that briefly knocked nearly 1,000 points off the Dow within minutes.

Though that crash predated today’s AI, it illustrates the danger of automated systems acting in unison. Experts now worry AI-driven trading could produce even faster and larger moves.

Closely related is model opacity and explainability. Modern AI models are often “black boxes”; even their designers cannot fully explain how a specific trading decision was reached. This poses problems for oversight. If an AI fund suddenly accumulates a large position in an obscure asset, regulators might not understand why.

The IMF notes that market participants insist on human oversight and explainable strategies, avoiding purely “black box” approaches. Likewise, a recent report from the law firm Sidley warns that deep-learning and reinforcement-learning systems can exhibit “emergent behaviour” that current market rules aren’t built to catch.

For example, if an AI learnt to evade fraud controls or to manipulate prices in some non-obvious way, standard surveillance systems might miss it. The opacity also raises ethical concerns: how do we verify that AI decisions are fair and unbiased? Finance is littered with historical biases, so an AI trained on past records might perpetuate discrimination. Wharton researchers point out that “bias in AI models is particularly pertinent” in finance, especially lending and insurance.

There are also privacy and manipulation issues. Bad actors can use AI to tailor scams or spread disinformation. SEC Chair Gary Gensler warns that AI-driven narrowcasting can facilitate fraud by zeroing in on individuals’ vulnerabilities. Indeed, regulators have already flagged concerns about AI-generated “deep fakes” of company announcements or rumours that could jolt markets.

Finally, there is the risk of systemic volatility. Many worry that AI might make crises worse by speeding up decision-making. In turbulence, when computers pile into or out of trades in milliseconds, prices can swing violently.

The Sidley report cites the IMF in noting that many AI strategies include circuit-breaker logic that could all trigger together under unprecedented moves, risking a sudden freeze of liquidity. In other words, while AI may “damp down” routine volatility by making markets more efficient, it might also set the stage for faster, sharper shocks. Small errors or adversarial attacks on widely used models could propagate quickly across markets. There is also concentration risk: just a few tech firms provide the most advanced AI services and cloud infrastructure, so outages or cyberattacks could disrupt financial systems more broadly.
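
The synchronisation worry is easy to picture with a toy simulation: if many funds apply the same volatility kill-switch, they all step back on the same tick. The threshold, shock size, and fund count below are invented purely to illustrate the mechanism, not drawn from any real strategy.

```python
# Toy illustration of shared circuit-breaker logic firing in unison.
# Threshold, shock size, and fund count are invented for illustration only.
import random
import statistics

random.seed(1)
THRESHOLD = 0.01   # assumed shared kill-switch: 1% rolling std dev of returns
N_FUNDS = 50
price, returns = 100.0, []

for t in range(300):
    shock = -0.05 if t == 200 else 0.0   # inject a sudden adverse move
    r = random.gauss(0, 0.003) + shock
    returns.append(r)
    price *= 1 + r

    window = returns[-20:]
    vol = statistics.pstdev(window) if len(window) == 20 else 0.0

    # Every fund runs the same rule, so they all halt on the same tick.
    halted = N_FUNDS if vol > THRESHOLD else 0
    if halted:
        print(f"t={t}: rolling vol {vol:.3%} breaches the limit; "
              f"{halted}/{N_FUNDS} funds pull back at once")
        break
```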

Governance meets technology

Awareness of these issues is growing. Governments and regulators worldwide are moving to govern AI in finance. In the EU, for example, the new AI Act will classify many financial AI systems as “high-risk” and impose strict obligations.

Practices like AI-based credit scoring or risk pricing will have to meet transparency, data quality, and audit requirements. The stated goal is “consistency and equal treatment in the financial sector.”

Financial institutions are also starting to set up their own AI governance. Many banks now require human sign-off on automated strategies. Investment funds maintain “model risk management” teams to test how strategies behave under stress. After the GameStop episode, social platforms began cracking down on stock-promotion groups. And financial regulators are updating rules in light of faster trading speeds.

Still, experts say more will be needed. For example, regulators worry about a lack of transparency when nonbanks use cutting-edge AI outside full supervision. There are calls for international coordination, like the Financial Stability Board surveying AI preparedness in different countries.

Another trend on the horizon is quantum computing. While today’s AI uses classical computers, quantum machines promise even more power. If scalable quantum computers arrive, they could revolutionise optimisation and simulation problems in finance.

Banks are already experimenting. In 2025, HSBC announced a pilot with IBM showing that a quantum algorithm could predict bond trade outcomes 34% better than classical methods.

UBS, Citigroup, and others are researching quantum for portfolio optimisation and risk analysis, and analysts estimate the “quantum technology” market could reach $100 billion by 2030.

In plain terms, quantum computing could solve certain portfolio or pricing problems much faster than today’s fastest supercomputers. However, practical quantum advantage remains in early stages, and much of that promise is years away. Even so, finance leaders like HSBC’s quantum head call this a “new frontier” in computing for markets.

Tale of two traders

The AI wave affects big institutions and small investors differently. Large financial firms such as banks, hedge funds, and trading firms have the resources to develop sophisticated AI. They run vast data centres, hire machine-learning experts, and deploy cutting-edge models.

These institutional players have led the AI adoption for over a decade as they’ve used automated algorithms in HFT and complex derivatives trading. They also invest in AI for risk management and compliance. Because of their scale, they have an edge in computing speed and data access.

Retail investors have lagged but are catching up. The same chatbots and analysis tools that institutions use are now available to individuals in a lighter form. As one industry report noted, about half of retail investors say they would use AI tools to pick or adjust investments, and around 13% already do. User-friendly platforms now offer AI-driven advice and portfolio screening.

For example, retail-friendly robo-advisors automate investing for individuals with modest accounts. Even individual day traders are experimenting with off-the-shelf AI bots or sentiment-tracker apps. Indeed, the widespread curiosity about ChatGPT and AI has “democratised” access to analysis once reserved for big banks. One former UBS analyst remarked that using ChatGPT for stock research was akin to “replicating many workflows” of an expensive Bloomberg terminal.

Balancing innovation and stability

AI’s role in finance is growing fast. As the IMF puts it, generative AI is the “latest stop on a journey” where technology incrementally improves markets. Its benefits in faster processing, new insights from data, and lower costs have already transformed many aspects of trading and investment.

But the journey is not without bumps. Our analysis shows that AI models carry real risks: they could unintentionally synchronise market behaviour, hide decision-making inside opaque algorithms, trigger flash crashes, and mislead investors.

Addressing these issues will require vigilance and innovation from regulators and firms alike. Regulators are awakening to the challenge, calling for AI governance frameworks and updating rules for faster, more complex markets.

Financial firms are instituting controls such as explainability requirements and kill switches for trading bots. Meanwhile, new technologies on the horizon, like quantum computing, promise even more powerful tools.

In the end, the AI transformation in finance mirrors other revolutions by creating opportunities and pitfalls. The central question will be how these systems are deployed. Used wisely, they can make markets more efficient and accessible to more people. Used recklessly, they could amplify our worst crashes or widen inequalities.

For investors and policymakers alike, the task is to harness AI’s ingenuity while keeping our collective financial system resilient. Industry leaders must ensure AI markets remain “transparent, fair, and inclusive,” even as the algorithms get ever smarter.
