
EU AI Act: A struggle to keep up with tech

While the EU has implemented the AI Act to set global standards, its broad and stringent regulations have raised concerns among startups and investors

Regulators worldwide are grappling with a rapidly growing technology that carries enormous economic and geopolitical consequences. European Union (EU) negotiations often end in deals struck after midnight, shaped by fatigue and horse-trading. The one the Council of the EU and the European Parliament reached on December 8–9, 2023, was no different.

Its outcome, the EU AI Act, is the first major piece of legislation regulating AI, including the ‘generative AI’ behind the chatbots that have become the Internet’s new craze since ChatGPT’s launch in late 2022.

Two days later, French startup Mistral AI unveiled Mixtral 8x7B, a new open-source large language model (LLM) for generative AI. Its sparse mixture-of-experts design, which activates only a couple of its eight expert networks for each token, lets it rival much larger proprietary models. Worse for regulators, because the model is released as open source, the Act’s harsher rules largely do not apply to it, presenting them with a new set of issues.
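To make the architecture concrete, the toy sketch below shows how a sparse mixture-of-experts layer routes each token through only a few of its experts. The sizes, random weights, and routing logic here are invented for illustration and are not Mistral’s actual implementation.

```python
# Minimal sketch of sparse mixture-of-experts routing, in the spirit of
# (but not identical to) Mixtral's design. All sizes and weights are
# illustrative, not Mistral's actual configuration.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8      # Mixtral-style: eight expert networks per layer
TOP_K = 2            # only two experts are activated per token
HIDDEN = 16          # toy hidden size for illustration

# Each "expert" is just a random linear map in this sketch.
experts = [rng.normal(size=(HIDDEN, HIDDEN)) for _ in range(NUM_EXPERTS)]
router = rng.normal(size=(HIDDEN, NUM_EXPERTS))

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts and mix their outputs."""
    logits = token @ router                      # router score per expert
    top = np.argsort(logits)[-TOP_K:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                     # softmax over the chosen experts
    # Only the selected experts run, which is why such a model is cheap to
    # serve relative to its total parameter count.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=HIDDEN)
print(moe_layer(token).shape)   # (16,)
```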

Mixtral’s disruptive potential exemplifies policymakers’ struggles to rein in AI. Tech companies believe self-regulation is the answer: former Google CEO Eric Schmidt, for instance, argues that governments, given their inclination to impose restrictive laws prematurely, should leave AI regulation to the tech corporations themselves.

How to control something that changes so fast is a question for most policymakers.

Setting EU law

The AI Act, which will take effect in May 2025, is the first attempt to answer that question. Given the bloc’s status as a regulatory powerhouse, it aims to set a European, and possibly worldwide, framework by covering practically all AI applications.

RPC partner Helen Armstrong said, “Large, multi-jurisdictional businesses may find it more efficient to comply with EU standards across their global operations on the assumption that they will probably substantially meet other countries’ standards.”

It is also the first attempt to regulate foundation models, or general-purpose AI (GPAI) models, which power AI systems.

All models must meet horizontal requirements, including making AI-generated content detectable, or their providers face fines of up to 7% of global revenue. How do you regulate something that changes so quickly? The Act’s answer is to tier risk and responsibility across both uses and AI models.

GPAIs deemed to carry systemic risk must undergo rigorous evaluations, incident reporting, and advanced cybersecurity procedures, including ‘red teaming,’ a simulated hacker attack. A model falls into the ‘systemic risk’ category based mainly on two factors: the amount of computation used to train it (more than 10^25 floating-point operations), a rough proxy for its capability, and whether it has more than 10,000 registered business users in the EU.
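As a rough illustration of how those triggers work, the sketch below checks a hypothetical model against the thresholds described above. The 6 × parameters × tokens approximation of training compute is a common rule of thumb rather than anything the Act prescribes, and the example figures are invented.

```python
# Back-of-the-envelope check of the systemic-risk triggers described above.
# The 6 * parameters * tokens estimate of training compute is a common rule
# of thumb, not something the Act itself prescribes; all figures below are
# purely illustrative.

FLOP_THRESHOLD = 1e25            # training-compute trigger cited in the Act
EU_BUSINESS_USER_THRESHOLD = 10_000

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 floating-point operations per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float,
                           eu_business_users: int) -> bool:
    """A model is presumed 'systemic risk' if either trigger is crossed."""
    return (estimated_training_flops(parameters, training_tokens) > FLOP_THRESHOLD
            or eu_business_users > EU_BUSINESS_USER_THRESHOLD)

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
print(estimated_training_flops(70e9, 2e12))          # ~8.4e23, below the bar
print(presumed_systemic_risk(70e9, 2e12, 3_000))     # False
# A much larger hypothetical model crosses the compute trigger:
print(presumed_systemic_risk(1.8e12, 13e12, 3_000))  # True: ~1.4e26 FLOPs
```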

It appears that only OpenAI’s GPT-4 and probably Google’s Gemini currently meet these criteria. Not everyone finds the criteria effective. Nigel Cannings, the founder of Intelligent Voice, pointed out that some high-capacity models may be relatively benign, while lower-capacity models may be deployed in high-risk contexts. The compute criterion may also encourage developers to find workarounds that keep models just under the threshold without reducing the underlying risks.

Much current AI research is, in any case, aimed at getting better results from less compute and data. Patrick Bangert, a data and AI expert at technology consulting firm Searce, said, “Classifying models by the amount of compute they require is only a short-term solution. These efforts are likely to break the compute barrier in the medium term, thus making this regulation void.”

The Act’s final draft was fiercely negotiated. France, Germany, and Italy initially resisted binding rules for foundation models, fearing they would hurt their startups. As a compromise, the Commission proposed horizontal rules for all models and codes of practice for the most powerful ones.

“There was a feeling that a lower threshold could hinder foundation model development by European companies, which were training smaller models at the time,” said Philipp Hacker, an AI regulation expert at the European New School of Digital Studies.

Hacker argued that this fear was misplaced, as the rules only codify the bare minimum of industry practice, and by some measures fall short of it. Domino Data Lab AI expert Kjell Carlsson said the threshold was settled after extensive lobbying, with an imperfect result. Others think the Act is simply too broad, arguing that it is far more effective to regulate use cases than the general technologies that underpin them.

Many European startups and SMEs say the limitations could hurt them compared to competitors.

The Future Society, an AI governance think tank, found that foundation model providers that invest in their training data, at roughly 1% of development costs, find compliance easier. Sceptics nevertheless argue that the requirements add yet another barrier within the EU’s regulatory framework, hindering innovation in an area where Europe desperately needs success stories.

Compared to the United States and China, the EU has created few AI unicorns and has lagged in research. Nicolai Tangen, head of Norway’s $1.6 trillion sovereign wealth fund, which uses AI in its investment decision-making, has publicly criticised the EU’s approach: “I am not saying it is good, but in America, you have a lot of AI and no regulation; in Europe, you have no AI and a lot of regulation.”

European firms face a fragmented market, stricter data protection regulations, and difficulty retaining AI professionals. Hacker says the Act’s unjustified “bad reputation” may make things worse.

“It is not particularly stringent, but there has been a lot of negative coverage, and many investors, especially from the international venture capital (VC) scene, treat the Act as an additional risk. This will hinder European unicorns’ fundraising,” he said.

Others disagree with this assessment. The Act’s rules, they argue, merely require VCs to add a new criterion to their scorecard: is the company building a model or product that is, and will remain, EU compliant?

Dan Shellard, partner at Paris-based venture capital firm Breega, said the regulation might also open up opportunities in regtech. Some believe it will boost innovation.

Chris Pedder, Chief Data Scientist at AI-powered edtech firm Obrizum, said forcing corporations to be more open and responsible will certainly spur innovation.

Another issue is that the technology is evolving faster than the legislation. The Act does not regulate open-source models like Mixtral 8x7B unless they pose a systemic risk. Releasing model weights publicly is meant to increase transparency and accessibility, but it also carries significant safety risks.

Open-source models also put capable AI within reach of far more users, who can run them on local hardware rather than relying on expensive cloud infrastructure.

Iain Swaine of BioCatch, a digital fraud detection startup, noted that in a decentralised system, it becomes easier to create malware, phishing sites, and deepfakes.

America is divided

The United States lags in regulation despite its commercial AI dominance. Multiple federal agencies oversee AI, creating a fragmented regulatory landscape. An executive order requires federal agencies to examine their use of AI and obliges developers of powerful AI systems to ensure they are ‘safe, secure, and trustworthy’ and to share safety test results with the US government.

Donald Trump has vowed to reverse the order, and without Republican support in Congress it may not endure. Congress’s bipartisan AI task force has yielded little, and given the partisanship, any compromise before the November elections looks improbable. Because American governments prize innovation and economic growth, US regulation is likely to remain less severe than Europe’s.

Morgan, Lewis & Bockius partner David Plotinsky said, “AI will be an area in which both Congress and the executive branch take a very incremental approach to regulating AI—including by first applying existing regulatory frameworks to AI rather than developing entirely new frameworks.”

States could fill this void, though Plotinsky said the risk is a “patchwork of regulations that may overlap in some areas and also conflict in others.” Apocalyptic predictions that an omnipotent AI could threaten humanity colour the debate, and some, like Elon Musk, have called for a pause in AI development. More mundane matters seem more urgent, however. The emergence of monopolies, especially in generative AI, is a serious concern, although the arrival of multiple ChatGPT competitors has allayed fears that OpenAI, the company behind ChatGPT, will dominate the market.

‘Our Planet Powered by AI’ author Mark Minevich said, “The industry’s high barriers to entry, such as the need for enormous data and computational power, mean that only a few huge incumbents, such as top big tech companies, could dominate.”

As AI becomes a flashpoint in the US-China relationship, policymakers are also concerned about how legislation affects US competitiveness. In another executive order, US President Joe Biden directed the Treasury to prohibit certain outbound AI investments in countries of concern and to review AI technologies for security vulnerabilities.

Plotinsky, a former acting chief of the US Department of Justice’s Foreign Investment Review Section, predicted that Washington would eventually need a risk-based approach to foundation models. Any such approach, he said, would have to consider whether a model was created in the United States or another trusted country, and what controls and other safeguards might be needed to address concerns about China’s ambitions.

China produces most AI research

Beijing has made national AI leadership by 2030 an explicit goal, backed by substantial state funding. Its Global AI Governance Initiative, which includes a proposal for a new international AI governance body, signals its desire to shape global regulation. The initiative also urges “opposing drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI,” a reference to US measures restricting American investment in China’s AI sector.

According to Wendy Chang, a technology analyst at the Mercator Institute for China Studies, China aspires to participate in international forums and influence the global development of AI regulation.

Domestically, generative AI must fit within Beijing’s tightly managed censorship regime, with rules openly requiring generated content to ‘reflect core socialist values.’ These requirements may hinder China’s AI ambitions in the global standards race the EU has set off, even as the government encourages Chinese enterprises to build generative AI tools that can compete internationally; Baidu and Alibaba both unveiled AI-powered chatbots last year.

The country’s early generative AI standards required developers to verify the ‘truth, correctness, objectivity, and diversity’ of training data, a high bar for models trained on online content. Recent regulatory changes only ask enterprises to ‘elevate the quality’ and ‘strengthen the truthfulness’ of training data rather than guarantee it outright, but hurdles remain.

One working group has even suggested specifying the proportion of answers a model may refuse to give. Given chatbots’ potential to spread falsehoods, such rules may force Chinese companies to train their models on restricted, firewalled data. Chinese companies and citizens cannot use ChatGPT, and after an AI tool criticised Mao Zedong, iFlytek’s founder apologised publicly. Chang said Beijing’s domestic information controls are a major issue for AI developers.

Compliance would be difficult for tech companies, especially smaller ones, and may deter many from entering the field. Already, companies are focusing on enterprise solutions rather than public-facing products, an outcome the government appears to welcome.

The Chinese government has so far published targeted AI regulations and is expected to follow with a comprehensive AI law. Its 2021 regulation of recommendation algorithms was driven by concerns over their role in information dissemination, seen as a threat to political stability and to China’s concept of ‘cyber sovereignty.’ Importantly, the rule created a registry of algorithms with “public opinion properties,” obliging developers to explain how their algorithms were trained and used. The registry now covers AI models and training data, and the first LLMs to pass these reviews were released in August.

China’s internet authority recently issued guidelines for AI-produced deepfakes, and its deep synthesis regulation, finalised five days before ChatGPT’s debut, requires synthetically generated content to be labelled.

Who owns this photo?

Another rising battleground is the ownership of intellectual property in the data used to train foundation models.

Generative AI has stunned creative workers, prompting legal action and strikes in industries, such as Hollywood, that had previously seemed insulated from technological disruption. Many artists have sued generative AI platforms for creating unlicensed derivative works.

Stock image supplier Getty Images, for example, has sued Stability AI, the company behind the image generator Stable Diffusion, for copyright and trademark infringement.

Trouble in the EU

Financial authorities, meanwhile, face AI problems of their own. Risk modelling, claims management, anti-money laundering, and fraud detection increasingly rely on AI, bringing serious hazards. In 2022, the Bank of England and the FCA reported that 79% of UK financial services organisations used machine learning, with 14% deeming it essential to their business. A major issue is the “black box” problem: algorithmic decision-making without transparency or accountability.

Regulators warn that AI may amplify systemic risks, including flash crashes, market manipulation through deepfakes, and herding effects as firms converge on similar models. The industry has promised better ‘explainability’ of how AI is used in decision-making, but this remains elusive, and heavy reliance on AI systems brings the risk of automation bias.
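What ‘explainability’ can mean in practice is illustrated, in highly simplified form, below: an additive scoring model whose decision can be broken down into per-feature contributions. The feature names and weights are invented, and real credit models are far more complex, which is precisely where the black-box problem arises.

```python
# A minimal sketch of an explainable, additive scoring model: the output can
# be decomposed into per-feature contributions ('reason codes'). Feature
# names and weights are invented for illustration; real credit models are
# far more complex, which is where the black-box problem arises.
import math

WEIGHTS = {"income": 0.8, "existing_debt": -1.2, "late_payments": -0.9}
BIAS = 0.3

def score(applicant: dict) -> tuple[float, dict]:
    """Return approval probability plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return probability, contributions

prob, reasons = score({"income": 1.2, "existing_debt": 0.7, "late_payments": 1.0})
print(f"approval probability: {prob:.2f}")
for feature, contribution in sorted(reasons.items(), key=lambda kv: kv[1]):
    print(f"  {feature:15s} {contribution:+.2f}")   # the 'reason codes'
```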

Scott Dawson from DECTA, a payment solutions provider, suggests that while transparency appears advantageous in theory, financial institutions often hide certain elements of their processes for legitimate reasons.

He cited fraud prevention as an example where more transparency about how financial services firms use AI systems could be counterproductive: “Telling the world what they are looking for would only make them less effective, leading to fraud.”

Another issue is algorithmic bias. AI in credit risk management can make loans harder to obtain, or worsen their terms, for marginalised groups. The EU’s planned Financial Data Access regulation, which lets financial institutions share consumer data with third parties, may compound the harm to vulnerable borrowers.
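A minimal sketch of one common bias check, comparing approval rates across groups (sometimes called demographic parity), is shown below. The records and the tolerance threshold are invented for illustration and do not reflect any legal standard.

```python
# Toy fairness audit: compare loan approval rates across groups
# (demographic parity). The records and the 'acceptable gap' threshold
# below are invented for illustration only.
from collections import defaultdict

decisions = [  # (group, approved) pairs from a hypothetical loan book
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(f"approval-rate gap: {gap:.2f}")
if gap > 0.2:                     # illustrative tolerance, not a legal standard
    print("disparity exceeds tolerance: review model and features")
```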

The EU AI Act classifies banks’ AI-based creditworthiness operations and life and health insurance pricing and risk assessments as high-risk activities, requiring them to comply with stricter regulations.

“New ethical challenges are triggering unintended biases, forcing the industry to reflect on the ethics of new models and think about evolving towards a new, common code of conduct for all financial institutions,” said Dun & Bradstreet head of banking and financial services, Sara de la Torre.

In the art disputes, Stable Diffusion’s proprietors responded by allowing artists to opt out and protect their IP.

Such legal action has raised the question of who owns AI-generated material—AI platforms, downstream providers, content creators, or users. Solutions include paying content creators, sharing revenue, and using open-source data.

EIP counsel Ellen Keenan-O’Malley said, “In the short term, I expect organisations to place greater reliance on contractual provisions, such as a broad intellectual property indemnity against third-party claims for infringement.”

Only the European Union has adopted a clear position: the AI Act requires model providers to take ‘adequate measures’ to safeguard copyright, including publishing detailed summaries of their training data and their copyright policies. Synopsys data specialist Curtis Wilson noted that banning the use of copyrighted photos for AI training would prevent AIs from mass-producing custom art.

He added, “But it would also ban image classification AI that detects cancerous tumours.”

Europe and China each want a piece of America’s tech superiority, making AI deployment a geopolitical matter. With AI models evolving so quickly and major economies taking such different approaches, the tech industry regards hopes of a global regulatory framework as overly optimistic; bilateral agreements may be the most that can realistically be achieved.

At a recent summit, Joe Biden and Xi Jinping agreed to open talks on AI, without offering specifics. Following a similar US-UK agreement to reduce regulatory divergence, the EU and US have agreed to deepen cooperation on AI-based technology, focusing on safety and governance.

The Bletchley Declaration, delivered at the first global AI safety summit at the United Kingdom’s Bletchley Park in November, called for international cooperation to mitigate AI risks, but action has yet to follow. With politicians and tech businesses facing the same headwinds that are fragmenting the global economy in an era of deglobalisation, unified AI regulation looks unlikely.

The EU has set the global AI standards with horizontal, and some say overly strict, rules for AI systems; the US, hampered by pre-election polarisation and the success of its AI firms, has taken a ‘wait-and-see’ approach that gives the tech industry a free hand; and China, as usual, censors domestically while trying to influence the global regulatory framework.

Morgan Wright, Chief Security Advisor at SentinelOne, an AI-powered cybersecurity platform, said, “The challenge going forward is not allowing China to dictate what standards are or promote policies regulating AI that favour them over everyone else.”

However, keeping up with technology is harder. If talkative chatbots surprised the world in 2022, the next waves of AI-powered innovation have left experts dumbfounded by their disruptive potential.

“The field is moving so fast, I am not sure that even venture capital firms not deeply immersed in the field for the last decade fully understand AI and its implications,” said Fluent Ventures founder Alexandre Lazarow.

According to Plotinsky from Morgan, Lewis & Bockius, regulators may be at a disadvantage.

He said, “The technology has evolved too rapidly for lawmakers and their staff to fully comprehend both the underlying technology and the related policy issues.”

The rapid growth of AI technology has created a complex challenge for regulators worldwide, with varying approaches emerging in the EU, US, and China. While the EU has implemented the AI Act to set global standards, its broad and stringent regulations have raised concerns among startups and investors. In contrast, the US takes a more cautious, innovation-driven stance, creating regulatory uncertainty. China, balancing innovation with tight censorship, seeks to influence global AI governance.

As AI technology advances quickly, international cooperation and adaptable regulatory frameworks are crucial. The future of AI regulation will likely hinge on finding a balance between fostering innovation and addressing the emerging risks of AI, with each region contributing its own approach to the global conversation.
