Meta, the parent company of Instagram, Facebook, and WhatsApp, is woven into everyday life, helping us stay in touch with loved ones and network efficiently. Most of us are hooked on our devices partly because of Meta’s dopamine hamster wheel. Yet despite the many harms attributed to it, Meta insists it is a force for good, it is genuinely useful to people around the world, and the market rewards it handsomely.
In 2024, Meta Platforms reported revenue of $164.50 billion. For the twelve months ending September 30, 2025, the social media giant’s revenue was approximately $189.46 billion. It’s a titan of industry that shareholders love, and that loves its shareholders. But the excessive love of shareholders is the root of all corporate sin.
Despite its skyrocketing revenue and incredible technological prowess, Meta does not appear to believe it should police its own marketplace or protect its users from fraud and harm. The digital advertising ecosystem, once heralded as a democratisation of commercial reach, has metastasised into a complex marketplace where the line between legitimate commerce and predatory fraud is increasingly obscured by algorithmic opacity.
Internal projections for fiscal year 2024 indicate that advertisements promoting scams, illegal goods, and prohibited content generated approximately $16 billion, roughly 10% of the company’s total annual revenue. This revenue is safeguarded by a “penalty bid” pricing mechanism that monetises high-risk advertisers rather than removing them, a policy framework that bans an advertiser only once internal systems are at least 95% certain it is committing fraud, and a corporate governance structure that explicitly caps revenue losses from safety enforcement at a fraction of the profits the fraud generates.
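Reuters did not publish Meta’s code, but the reported policy can be sketched in a few lines. The following is a minimal, hypothetical model of the “penalty bid” logic described above; the 95% ban threshold is the reported figure, while the function name, the 50% suspicion band, and the penalty multiplier are illustrative assumptions, not Meta’s actual parameters.

```python
# Hypothetical sketch of the reported "penalty bid" enforcement policy.
# The 0.95 ban threshold is from the reporting; the penalty multiplier
# and the suspicion band are illustrative assumptions.

BAN_THRESHOLD = 0.95      # reported: bans require ~95% certainty of fraud
PENALTY_MULTIPLIER = 2.0  # assumed: suspected scammers pay a premium instead

def price_advertiser(base_cost: float, scam_probability: float) -> float | None:
    """Return the ad price charged to an advertiser, or None if banned."""
    if scam_probability >= BAN_THRESHOLD:
        return None  # only near-certain fraudsters are removed
    if scam_probability >= 0.5:
        # Likely-but-unproven scammers are monetised at a higher rate
        # rather than deplatformed.
        return base_cost * PENALTY_MULTIPLIER
    return base_cost

print(price_advertiser(10.0, 0.97))  # None -> banned
print(price_advertiser(10.0, 0.80))  # 20.0 -> kept on the platform, at a premium
print(price_advertiser(10.0, 0.10))  # 10.0 -> normal pricing
```

The perverse incentive is visible even in this toy version: every advertiser sitting in the suspicion band is worth more per impression than a legitimate one.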
So, what does this mean? In effect, Meta will let bad actors hawk horse dung or magic remedies as long as they are willing to pay a premium for the privilege. While the company has long faced scrutiny over data privacy and political influence, investigations surfacing in late 2024 and throughout 2025 have illuminated a far more tangible structural crisis: the institutionalisation of revenue derived from fraudulent advertising.
What’s really happening?
In November 2025, a Reuters investigation, corroborated by a cache of internal documents spanning 2021 to 2025, revealed a stark internal projection: Meta anticipated $16 billion in 2024 revenue specifically from ads for scams and banned goods. To put that figure in context, $16 billion rivals or exceeds the annual revenue of major global companies such as Spotify or eBay. It is a sum that materially affects the company’s earnings per share and, consequently, its stock valuation.
This revenue stream is categorised internally under various euphemisms, including “violating revenue” or segments associated with higher legal risk. The existence of such specific forecasting line items indicates that this revenue is not accidental. Financial modelling that explicitly accounts for illicit revenue suggests a structural dependency: removing this stream would mean voluntarily cutting the company’s top line by nearly 10%, a move that would likely trigger a shareholder revolt in an environment where growth in legitimate user acquisition has plateaued.
To grasp the scale, Meta’s platforms serve users an estimated 15 billion scam ads a day. A lesser entity would be penalised and shut down in most countries, but the mighty titan of the digital industry has so far escaped meaningful consequences for its amoral posture on consumer safety. Upper management at Meta does not appear to care whether online casinos, pump-and-dump investment schemes, fake websites, or purveyors of illegal drugs flood the platform with misleading ads, as long as the money keeps flowing.
After the Reuters investigation and several high-profile cases filed against it globally, most notably the Calise vs Meta lawsuit and the Brazil AGU lawsuit, the company is now in full crisis-management mode.
Calise vs Meta is a class-action lawsuit in the Ninth Circuit pursuing claims of unjust enrichment, arguing that Meta actively solicited and profited from third-party fraud and should therefore disgorge the revenue. The Brazilian Attorney General’s Office (AGU) has also filed suit to recover revenue from 1,770 specific fraudulent ads that used government symbols to scam citizens, demanding that the funds be deposited into a rights defence fund. Something similar is happening in the United Kingdom, where regulators found that Meta’s platforms were involved in 54% of all authorised push payment (APP) scams, in which users are tricked into sending money.
Meta’s own figures put scam-linked ads at roughly 10% of revenue in 2024, with internal targets to cut that share to 7.3% in 2025 and 5.8% by 2027. The gradualness of that glide path is telling: the company evidently has the tools to act now but is choosing a slow rollout to protect its profits and please shareholders.
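For illustration, holding revenue flat at the 2024 base (Meta’s revenue is in fact still growing, so the real dollar figures would be larger), those percentage targets translate into dollar terms roughly as follows. This is back-of-envelope arithmetic, not a figure from the documents.

```python
# Back-of-envelope translation of the reported reduction targets,
# holding 2024 revenue (~$164.5B) constant purely for illustration.
revenue_2024 = 164.5e9
targets = {"2024": 0.100, "2025": 0.073, "2027": 0.058}

for year, share in targets.items():
    print(f"{year}: {share:.1%} of revenue ≈ ${share * revenue_2024 / 1e9:.1f}B")
# 2024: 10.0% of revenue ≈ $16.4B
# 2025: 7.3% of revenue ≈ $12.0B
# 2027: 5.8% of revenue ≈ $9.5B
```

Even the 2027 target, in other words, leaves scam-linked ads generating on the order of $9–10 billion a year.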
Of the $16 billion in ad revenue received from bad actors, $7 billion came from “higher-risk” parties, advertisers whose behaviour was dubious enough that Meta’s own systems flagged them as such. The most critical insight from the internal disclosures is the calculated decision to tolerate this revenue stream based on a comparison with potential regulatory penalties.
The documents suggest a stark cost-benefit analysis. While revenue from scam ads is estimated at nearly $7 billion annually, the company’s internal risk models projected that regulatory fines for these violations would likely cap at around $1 billion. Instead of punishing or deplatforming these advertisers, Meta merely charges them higher rates.
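The asymmetry is easy to quantify. Using the reported figures, a crude expected-value calculation, one that ignores reputational and litigation costs, looks like this:

```python
# Crude expected-value view of the reported trade-off. The two inputs are
# the figures from the internal documents as reported; the netting is ours.
annual_scam_ad_revenue = 7e9   # reported: ~$7B/year from higher-risk advertisers
projected_fine_ceiling = 1e9   # reported: regulatory fines projected to cap near $1B

net_benefit_of_inaction = annual_scam_ad_revenue - projected_fine_ceiling
print(f"Net benefit of tolerating the ads: ${net_benefit_of_inaction / 1e9:.0f}B per year")
# -> $6B per year: even a worst-case fine leaves the stream comfortably profitable
```

Seen this way, the fine is not a deterrent; it’s a modest cost of doing business.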
It’s worth keeping in mind that Meta’s platforms are implicated in roughly one-third of all successful scams in the US today. Worldwide, the total cost of digital ad fraud was estimated at $81 billion in 2022 and was expected to surpass $100 billion in 2023, evidence that current measures are not keeping up with increasingly sophisticated scams.
Furthermore, internal memos revealed the existence of revenue guardrails for safety teams. In one specific instance, a fraud prevention initiative was restricted to actions that would not reduce total ad revenue by more than 0.15% (approximately $135 million).
This explicit capping of safety measures based on revenue impact demonstrates that the risk premium is a protected income stream, insulated from the full force of the company’s own trust and safety capabilities.
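Again, the internal tooling is not public, but a guardrail of this kind amounts to a simple gating condition. Here is a minimal sketch, assuming the cap is checked against projected revenue impact before an enforcement measure ships; the function and the revenue base are illustrative (the base below is chosen so that 0.15% works out to the reported ~$135 million).

```python
# Hypothetical sketch of a "revenue guardrail" on safety enforcement.
# The 0.15% cap is the reported figure; the gating logic is assumed.
GUARDRAIL_FRACTION = 0.0015  # reported cap: 0.15% of ad revenue

def may_ship_enforcement(projected_revenue_loss: float, total_ad_revenue: float) -> bool:
    """An anti-fraud measure ships only if its revenue impact stays under the cap."""
    return projected_revenue_loss <= GUARDRAIL_FRACTION * total_ad_revenue

# With a $90B base, the cap is $135M. A measure projected to remove $200M
# of scam-ad revenue would be blocked, regardless of the user harm it prevents.
print(may_ship_enforcement(200e6, 90e9))  # False
print(may_ship_enforcement(100e6, 90e9))  # True
```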
Who is profiting and how?
The interests of platforms and the operational methodologies of fraudsters have become dangerously aligned in what is now a complex adversarial theatre. Ad delivery systems prioritise engagement metrics such as Click-Through Rate (CTR) and Estimated Action Rate (EAR) over content veracity, creating a fertile substrate in which fraudulent actors do not merely survive but thrive.
At the core of the ad delivery engine lies the auction formula, a mathematical arbiter that decides which advertisement is shown to a user at any given millisecond. On platforms like Google, Facebook, or Instagram, you don’t win the auction with money alone; you win it with a combination of your bid, predicted engagement (EAR), and ad quality.
When a fraudster runs a campaign promising “Guaranteed 500% Returns in 24 Hours” or “Miracle Weight Loss Without Dieting,” users interact with these ads at high rates. The algorithm, blind to the veracity of the claim and optimising strictly for the probability of action, registers the high interaction as a signal of quality and relevance. Consequently, the auction mechanism rewards the fraudster with a higher EAR, which in turn lowers their effective Cost Per Mille (CPM) or Cost Per Click (CPC).
In effect, the platform’s efficiency algorithms subsidise the distribution of scam content, allowing fraudsters to reach vast audiences at a fraction of the cost paid by legitimate brands.
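Meta has publicly described its auction as ranking ads by “total value”, commonly summarised as bid × estimated action rate plus a quality term. A simplified sketch, with all numbers invented for illustration, shows how a scam ad’s inflated engagement lets it win the auction while bidding far less than an honest competitor:

```python
# Simplified model of an engagement-weighted ad auction. The
# bid * EAR + quality structure follows Meta's publicly described
# "total value" ranking; every number below is illustrative.

def total_value(bid: float, ear: float, quality: float) -> float:
    """Auction score: willingness to pay, scaled by predicted engagement."""
    return bid * ear + quality

legit = total_value(bid=5.00, ear=0.010, quality=0.02)  # honest claims, modest clicks
scam = total_value(bid=2.00, ear=0.040, quality=0.02)   # "500% returns!" draws clicks

print(f"legit: {legit:.3f}, scam: {scam:.3f}")
# legit: 0.070, scam: 0.100 -> the scam wins while bidding 60% less
```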
The digital ad fraud ecosystem has matured into a sophisticated business-to-business economy. While the end-point scammers running fake crypto exchanges or counterfeit e-commerce stores bear the operational risk, a vast shadow supply chain of service providers extracts guaranteed profits at every stage of the fraudulent lifecycle. These entities operate with the efficiency of legitimate SaaS (Software-as-a-Service) companies, often earning monthly recurring revenue (MRR) regardless of whether the scammer’s campaign succeeds or fails.
The primary beneficiaries are vendors of evasion technology. Cloaking services, which filter traffic to hide malicious landing pages from platform moderators, have evolved into subscription-based platforms. Services like “TrafficArmor” and “Cloaking House” operate openly, charging tiered monthly fees ranging from $30 to $600, or using pay-per-click models in which scammers pay premium rates (e.g., $129 for 32,500 clicks) to ensure their ads survive automated review. These companies profit by selling invisibility, a technological tollbooth that every serious fraudster must pay to reach an audience.
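Cloaking itself is conceptually simple, which is part of why it commoditised so quickly. The sketch below shows only the core decision; real services layer on IP-reputation databases, device fingerprinting, and behavioural checks, and every value here (the IP prefixes, the user-agent hints, the URLs) is an illustrative assumption.

```python
# Toy illustration of ad cloaking: reviewers see a harmless page,
# everyone else sees the scam. All values are illustrative assumptions.

REVIEWER_IP_PREFIXES = ("31.13.", "66.220.")  # assumed platform crawler ranges
BOT_USER_AGENT_HINTS = ("facebookexternalhit", "bot", "crawler")

def choose_landing_page(ip: str, user_agent: str) -> str:
    ua = user_agent.lower()
    looks_like_reviewer = (
        any(ip.startswith(p) for p in REVIEWER_IP_PREFIXES)
        or any(hint in ua for hint in BOT_USER_AGENT_HINTS)
    )
    # Moderators and crawlers get the "white" page; victims get the "black" page.
    return ("https://example.com/benign-storefront" if looks_like_reviewer
            else "https://example.com/fake-investment-site")

print(choose_landing_page("31.13.24.7", "facebookexternalhit/1.1"))   # benign page
print(choose_landing_page("84.12.9.3", "Mozilla/5.0 (iPhone; ...)"))  # scam page
```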
Supporting this is the Bulletproof Hosting industry. Unlike legitimate hosts that comply with takedown requests, providers like Strox or SpeedHost247 charge premiums (e.g., $85/month or $3/day) to host malicious landing pages on servers explicitly designed to ignore abuse reports and law enforcement inquiries. By commoditising resilience, they ensure that even when a scam is detected, the infrastructure remains operational long enough to be profitable.
Fraud requires a constant supply of fresh identities to bypass platform bans. This has enriched dark web marketplaces and account brokers, who act as wholesalers of digital reputation. The most lucrative commodities are verified Business Manager (BM) accounts, hacked or farmed Facebook/Meta ad accounts with high spending limits and histories of legitimate activity. A verified BM can fetch $120 to $250, while aged accounts (which look less suspicious to algorithms) sell for $45–$50.
This sector also profits from the stolen-credit model. Brokers sell stolen credit card details for as little as $10–$40 per card, which fraudsters then link to compromised agency accounts. This arbitrage allows scammers to run thousands of dollars’ worth of ads on other people’s money, while the identity brokers pocket risk-free profit from the initial data sale.
Perhaps the most significant evolution is the shift to Scam-as-a-Service (ScaaS). Technical syndicates now build and lease entire fraud kits (pre-coded phishing sites, crypto drainer scripts, and back-end management panels) to lower-level criminals.
“Instead of charging a flat fee, these developers often take a commission. For instance, the Inferno Drainer malware operated on a 20% commission model, syphoning off a fifth of all stolen funds from its affiliates, generating over $87 million in illicit profit before ceasing operations. This franchise model allows technical groups to scale their revenue infinitely without ever directly engaging with a victim,” said Reuters journalist Jeff Horwitz, who has been covering the alleged ad-related irregularities involving Meta.
Finally, the demand for human engagement signals has created a labour economy in Southeast Asia (e.g., Vietnam, Myanmar) and parts of Eastern Europe. “Click Farms” or “Fraud Farms” employ low-wage workers to manually interact with ads, solve CAPTCHAs, and warm up accounts.
“These operations charge roughly $1 per 1,000 clicks/likes, creating a volume-based revenue stream that exploits global wage disparities to defeat advanced behavioural biometrics. By providing the human touch that algorithms crave, these farms monetise the very mechanism designed to stop them,” Horwitz said.
And it doesn’t stop there. The data collected at these farms is often resold. If you’ve been the victim of a cybercrime, there’s a 34% chance it will happen again if you’re an individual, and an 84% chance if you’re a business. Once scammed, you can end up on what’s called a ‘suckers list,’ marking you as an easy target. These lists are valuable, and people are willing to pay a lot to get them.
How is the world reacting to it?
The world is reacting to the industrialisation of ad fraud with a shift from “user beware” to platform liability. In 2024 and 2025, governments and industries moved to dismantle the economic impunity of platforms, forcing them to bear the costs of the fraud they facilitate.
The most significant development is the regulatory move to force reimbursement. In 2024, for example, the UK Payment Systems Regulator implemented a mandatory reimbursement requirement for Authorised Push Payment (APP) fraud. Crucially, liability is now split 50:50 between the sending bank and the receiving payment service provider.
While this primarily targets banks, it has created immense pressure from the financial sector on tech platforms. Banks, now on the hook for millions in refunds, are aggressively lobbying for a “polluter pays” model, arguing that since 60–80% of scams originate on Meta’s platforms, the tech giants should contribute to the reimbursement pot.
Effective December 2024, Singapore’s Shared Responsibility Framework assigns specific duties to financial institutions and telcos to mitigate phishing scams. If banks fail to send real-time transaction alerts or impose cooling-off periods, they are liable for losses. This sets a regulatory precedent in which infrastructure providers are held financially accountable for gatekeeping failures. Governments are moving beyond voluntary codes of conduct to enforceable legislation with massive financial penalties.
The UK Online Safety Act, fully enforceable in 2025, requires platforms to proactively prevent fraudulent advertising. Non-compliance can result in fines of up to £18 million or 10% of global annual turnover, whichever is greater (potentially billions for Meta).
In Europe, something similar is happening with the “Digital Services Act.” The European Commission has opened investigations into “Very Large Online Platforms” regarding their risk mitigation for fraudulent ads. The DSA empowers the European Union to fine companies up to 6% of their global turnover if they fail to manage systemic risks, including the spread of financial scams.
In Australia, the “Scams Prevention Framework,” which was passed in early 2025, introduces mandatory codes for banks, telcos, and digital platforms. It includes fines of up to AUD 50 million for non-compliance, specifically targeting the failure to detect and remove scam content.
There is also litigation from celebrities. For example, Andrew Forrest vs Meta is an ongoing case in which Australian billionaire Andrew Forrest has pursued Meta in both Australian and US courts over the proliferation of crypto scams using his likeness. While the Australian criminal case was dropped due to evidential hurdles, the US civil lawsuit survived a motion to dismiss in 2024.
This case is pivotal as it challenges Section 230 immunity often claimed by platforms, arguing that Meta’s ad tools contributed to the content creation, thereby stripping them of neutral publisher status.
Even the Australian Competition and Consumer Commission (ACCC) sued Meta for aiding and abetting false or misleading conduct by publishing scam ads featuring public figures, arguing that Meta’s algorithms actively targeted these scams at susceptible users.
Meta has, under immense pressure, reversed its 2021 decision to abandon facial recognition. In late 2024, the company began testing facial recognition technology to combat “celeb-bait” scams. The system compares faces in suspected ads against the profile pictures of public figures.
If a match is found and the ad is a scam, it is blocked. This marks a significant concession, as it acknowledges that privacy concerns regarding biometrics are outweighed by the need to stop the financial bleeding caused by industrial-scale fraud.
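Meta has not published the system’s internals. A minimal sketch of the matching step, assuming an off-the-shelf face-embedding model and a cosine-similarity threshold (the 0.85 cutoff and every name below are hypothetical), would look something like this:

```python
# Hypothetical sketch of "celeb-bait" matching via face embeddings.
# The threshold, names, and synthetic test data are illustrative assumptions.
import numpy as np

MATCH_THRESHOLD = 0.85  # assumed similarity cutoff

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_celeb_bait(ad_face: np.ndarray, public_figure_faces: list[np.ndarray],
                  flagged_as_scam: bool) -> bool:
    """Block only when the ad is already suspect AND matches a protected face."""
    if not flagged_as_scam:
        return False
    return any(cosine_similarity(ad_face, ref) >= MATCH_THRESHOLD
               for ref in public_figure_faces)

rng = np.random.default_rng(0)
celeb = rng.normal(size=128)                                # stand-in embedding
print(is_celeb_bait(celeb, [celeb], True))                  # True: same face, flagged ad
print(is_celeb_bait(rng.normal(size=128), [celeb], True))   # almost surely False
```

Note the two-signal design Meta describes: a face match alone does not block an ad, which is presumably how legitimate endorsements survive.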
Major players like Meta, Coinbase, and Match Group have formed coalitions to share intelligence on pig-butchering operations, aiming to sever the communication lines between the scam compounds and their victims.
Engagement fuels fraud risks
This is the aftermath of prioritising engagement over verification: an ecosystem where scams and fraud flourish and customers get hurt. At the heart of this crisis lies the EAR-driven auction, a mechanism that inadvertently subsidises deception by rewarding the hyper-engaging nature of scams with lower distribution costs. This economic alignment between the platform’s profit motives and the fraudster’s operational goals has created a “market for lemons,” in which predatory content effectively crowds out legitimate commerce.
The “Retargeting Loop” further exacerbates this by trapping vulnerable populations in algorithmic echo chambers, commoditising their susceptibility, and reselling it through the secondary market of recovery scams.
Technologically, the ecosystem has evolved into an asymmetric arms race, where enforcement is consistently outpaced by evasion. The transition from simple static landing pages to Generation 4 cloaking technologies, which are capable of analysing device telemetry, battery status, and gyroscopic movements in milliseconds, demonstrates that fraud is no longer the domain of opportunistic amateurs. It has industrialised into a sophisticated Fraud-as-a-Service economy. This shadow supply chain, composed of bulletproof hosting providers, identity brokers on the dark web, and commercial cloaking services, operates with the efficiency of the legitimate software sector.
By lowering the technical barrier to entry, these enablers have democratised access to high-end evasion tools, allowing even low-skilled actors to launch enterprise-grade attacks against global platforms.
The failure of self-regulation is now evident in the global legislative pivot toward platform liability. For over a decade, the industry operated under a “user beware” paradigm, but the sheer scale of financial loss has forced a regulatory correction. Initiatives like the United Kingdom’s mandatory reimbursement requirement and Singapore’s “Shared Responsibility Framework” signal the end of platform immunity.
By shifting the financial burden of fraud from the victim to the infrastructure providers, regulators are attempting to realign economic incentives. Only when the cost of hosting a scam exceeds the revenue generated from its ads will platforms invest the necessary resources to close the technological loopholes they currently tolerate.
Ultimately, the future of the digital advertising economy hinges on a fundamental shift from plausible deniability to mandatory verification. The era of anonymous algorithmic bidding must yield to a “Know Your Business” standard, where access to the ad auction is predicated on verified identity rather than mere creditworthiness.
As Generative AI threatens to flood the web with infinite synthetic content, the only viable defence is a strict chain of custody for digital identity. If structural reform doesn’t ensue soon, corporate social media platforms will slowly transform into a black market without oversight.
The world is reacting, but laws are struggling to keep up with fast-moving algorithms. For now, as a reader and consumer, be careful: any ad you see on Instagram or Facebook could be a scam, monetised by Meta Platforms, operator of some of the world’s largest advertising platforms.
