<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Scammers Archives - International Finance</title>
	<atom:link href="https://internationalfinance.com/tag/scammers/feed/" rel="self" type="application/rss+xml" />
	<link>https://internationalfinance.com/tag/scammers/</link>
	<description>International Finance - Financial News, Magazine and Awards</description>
	<lastBuildDate>Fri, 16 Jan 2026 13:16:48 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://internationalfinance.com/wp-content/uploads/2020/08/favicon-1-75x75.png</url>
	<title>Scammers Archives - International Finance</title>
	<link>https://internationalfinance.com/tag/scammers/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Meta lets scammers pay to play</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=meta-lets-scammers-pay-to-play</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 14:52:10 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[advertising]]></category>
		<category><![CDATA[banks]]></category>
		<category><![CDATA[Digital Advertising]]></category>
		<category><![CDATA[economy]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Instagram]]></category>
		<category><![CDATA[Meta]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[payment]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<category><![CDATA[shareholders]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54462</guid>

					<description><![CDATA[<p>It's important to keep in mind that Meta is partly responsible for one-third of all successful scams in the US today</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/">Meta lets scammers pay to play</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Meta, the parent company of Instagram, Facebook, and WhatsApp, is woven into our daily lives, helping us connect with loved ones and network efficiently. Most of us are hooked on our devices partly because of Meta&#8217;s dopamine-driven hamster wheel. Despite the many ways it causes harm, Meta presents itself as a force for good, it is genuinely useful to people around the world, and the market rewards it accordingly.</p>
<p>In 2024, Meta Platforms reported revenue of $164.50 billion. As of September 30, 2025, the social media giant’s revenue was approximately $189.46 billion. It&#8217;s a titan of industry that shareholders love, and that loves its shareholders. But the excessive love of shareholders is the root of all corporate sin.</p>
<p>Despite its skyrocketing revenue and incredible technological prowess, Meta doesn&#8217;t think it should regulate its market or protect its customers from fraud and harm. The digital advertising ecosystem, once heralded as a democratisation of commercial reach, has metastasised into a complex marketplace where the distinctions between legitimate commerce and predatory fraud are increasingly obscured by algorithmic opacity.</p>
<p>Internal projections for the fiscal year 2024 indicate that advertisements promoting scams, illegal goods, and prohibited content generated approximately $16 billion, representing roughly 10% of the company&#8217;s total annual revenue. This revenue is safeguarded by a penalty bid pricing mechanism that monetises high-risk advertisers rather than removing them, a policy framework that sets enforcement thresholds at a staggering 95% certainty level and a corporate governance structure that explicitly caps revenue losses from safety enforcement at a fraction of the profits generated by the fraud.</p>
<p>So, what does this mean? Meta will even let bad actors sell horse dung or magic remedies if they are willing to pay a premium for their risky endeavour. While the company has long faced scrutiny regarding data privacy and political influence, investigations surfacing in late 2024 and throughout 2025 have illuminated a far more tangible structural crisis: the institutionalisation of revenue derived from fraudulent advertising.</p>
<p><strong>What&#8217;s really happening?</strong></p>
<p>In November 2025, a Reuters investigation, corroborated by a cache of internal documents spanning 2021 to 2025, revealed a stark internal projection. Meta anticipated $16 billion in revenue for 2024, specifically from ads for scams and banned goods. To contextualise this figure, $16 billion exceeds the annual revenue of major global entities such as Spotify or eBay (Fortune 500 companies). It is a sum that materially impacts the company&#8217;s earnings per share and, consequently, its stock valuation.</p>
<p>This revenue stream is categorised internally under various euphemisms, including &#8220;violating revenue&#8221; or segments associated with higher legal risk. The existence of such specific forecasting line items indicates that this revenue is not accidental. Financial modelling that explicitly accounts for illicit revenue suggests a fiduciary dependency; removing this revenue stream would require a voluntary correction of the company’s top line by nearly 10%, a move that would likely trigger a shareholder revolt in an environment where growth in legitimate user acquisition has plateaued.</p>
<p>To put things into context, Meta shows users an estimated 15 billion scam ads a day. A lesser entity would be penalised and shut down in most countries, but the mighty titan of the digital industry has so far escaped consequences for its amoral position on consumer safety. Upper management at Meta does not care if an online casino, a pump-and-dump investment scheme, a fake website, or a purveyor of illegal drugs floods the platform with misleading ads, as long as its pockets are full.</p>
<p>After the Reuters investigation and some high-profile cases against it globally, most notably the Calise vs Meta lawsuit and the Brazil AGU lawsuit, the company is trying its best at crisis management.</p>
<p>Calise vs Meta is a class-action lawsuit in the Ninth Circuit pursuing claims of unjust enrichment, arguing that Meta actively solicited and profited from third-party fraud and thus should disgorge the revenue. The Brazilian Attorney General’s Office has also filed suit to recover revenue from 1,770 specific fraudulent ads that used government symbols to scam citizens, demanding that the funds be deposited into a rights defence fund. Something similar is happening in the United Kingdom, where regulators found that Meta platforms were involved in 54% of all authorised push payment scams (where users are tricked into sending money).</p>
<p>The Instagram parent company says only 10% of its 2024 revenue came from scams and aims to cut that share to 7.3% in 2025 and 5.8% by 2027. The claim seems absurd: the company has the tools to stop the fraud now, but chooses a slow rollout to protect its profits and please shareholders.</p>
<p>Of the $16 billion in ad revenue received from bad actors, roughly $7 billion came from higher-risk parties, advertisers so dubious that Meta&#8217;s own systems flag them as such. The most critical insight from the internal disclosures is the calculated decision to tolerate this revenue stream based on a comparison with potential regulatory penalties.</p>
<p>The documents suggest a stark cost-benefit analysis: while revenue from scam ads is estimated at nearly $7 billion annually, the company’s internal risk models projected that regulatory fines for these violations would likely cap at around $1 billion. Instead of punishing or deplatforming these individuals and organisations, Meta merely charges them a higher fee.</p>
<p>It&#8217;s important to keep in mind that Meta is partly responsible for one-third of all successful scams in the US today. Worldwide, the total cost of ad fraud was estimated at $81 billion in 2022 and was expected to surpass $100 billion in 2023, showing that current measures aren’t keeping up with increasingly sophisticated scams.</p>
<p>Furthermore, internal memos revealed the existence of revenue guardrails for safety teams. In one specific instance, a fraud prevention initiative was restricted to actions that would not reduce total ad revenue by more than 0.15% (approximately $135 million).</p>
<p>This explicit capping of safety measures based on revenue impact demonstrates that the risk premium is a protected income stream, insulated from the full force of the company’s own trust and safety capabilities.</p>
<p><strong>Who is profiting and how?</strong></p>
<p>The economic interests of platforms and the operational methodologies of fraudsters have become dangerously aligned, turning digital advertising into a complex adversarial theatre. Platform delivery systems prioritise engagement metrics such as Click-Through Rate and Estimated Action Rate (EAR) over content veracity, creating a fertile substrate where fraudulent actors do not merely survive but thrive.</p>
<p>At the core of the ad delivery engine lies the auction formula, a mathematical arbiter that decides which advertisement is shown to a user at any given millisecond. You don’t win the bid with money alone on platforms like Google, Facebook, or Instagram; you win it with your monetary bid weighted by ad quality and EAR.</p>
<p>When a fraudster runs a campaign promising &#8220;Guaranteed 500% Returns in 24 Hours&#8221; or &#8220;Miracle Weight Loss Without Dieting,&#8221; users interact with these ads at high rates. The algorithm, blind to the veracity of the claim and optimising strictly for the probability of action, registers this high interaction as a signal of quality and relevance. Consequently, the auction mechanism rewards the fraudster with a higher EAR, which inversely lowers their Cost Per Mille or Cost Per Click.</p>
<p>In effect, the platform’s efficiency algorithms subsidise the distribution of scam content, allowing fraudsters to reach vast audiences at a fraction of the cost paid by legitimate brands.</p>
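The subsidy mechanism described above can be sketched as a toy calculation. The scoring formula and every number here are illustrative assumptions for exposition, not Meta's actual auction implementation:

```python
# Toy sketch of engagement-weighted ad-auction ranking, as described in
# the article. Formula, weights, and figures are hypothetical assumptions,
# NOT Meta's real auction code.

def total_value(bid_usd: float, estimated_action_rate: float, quality_score: float) -> float:
    """Rank an ad by its bid weighted by predicted engagement and quality."""
    return bid_usd * estimated_action_rate * quality_score

# A legitimate brand bids high but draws modest engagement.
legit = total_value(bid_usd=5.00, estimated_action_rate=0.01, quality_score=1.0)

# A scam ad ("Guaranteed 500% Returns!") bids less, but its sensational
# copy earns a much higher predicted action rate.
scam = total_value(bid_usd=2.00, estimated_action_rate=0.05, quality_score=1.0)

print(f"legit ad score: {legit:.3f}")  # 0.050
print(f"scam ad score:  {scam:.3f}")   # 0.100

# The engagement-optimising auction ranks the scam ad first, letting it
# reach users despite bidding far less than the legitimate brand.
assert scam > legit
```

Under these assumed numbers, the scam ad wins the slot while bidding less than half as much, which is the "subsidy" the article describes: high predicted engagement lowers the effective price of distribution.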
<p>The digital ad fraud ecosystem has matured into a sophisticated business-to-business economy. While the end-point scammers running fake crypto exchanges or counterfeit e-commerce stores bear the operational risk, a vast shadow supply chain of service providers extracts guaranteed profits at every stage of the fraudulent lifecycle. These entities operate with the efficiency of legitimate SaaS (Software-as-a-Service) companies, often earning monthly recurring revenue (MRR) regardless of whether the scammer’s campaign succeeds or fails.</p>
<p>The primary beneficiaries are vendors of evasion technology. Cloaking services, which filter traffic to hide malicious landing pages from platform moderators, have evolved into subscription-based platforms. Services like “TrafficArmor” and “Cloaking House” operate openly, charging tiered monthly fees ranging from $30 to $600, or utilising pay-per-click models where scammers pay premium rates (e.g., $129 for 32,500 clicks) to ensure their ads survive automated review. These companies profit by effectively selling invisibility, creating a technological tollbooth that every high-end fraudster must pay to access the audience.</p>
<p>Supporting this is the Bulletproof Hosting industry. Unlike legitimate hosts that comply with takedown requests, providers like Strox or SpeedHost247 charge premiums (e.g., $85/month or $3/day) to host malicious landing pages on servers explicitly designed to ignore abuse reports and law enforcement inquiries. By commoditising resilience, they ensure that even when a scam is detected, the infrastructure remains operational long enough to be profitable.</p>
<p>Fraud requires a constant supply of fresh identities to bypass platform bans. This has enriched Dark Web marketplaces and account brokers, who act as wholesalers of digital reputation. The most lucrative commodities are Verified Business Managers who hack or farm Facebook/Meta ad accounts with high spending limits and histories of legitimate activity. A verified BM can fetch $120 to $250, while aged accounts (which look less suspicious to algorithms) sell for $45–$50.</p>
<p>This sector also profits from the Stolen Credit model. Brokers sell stolen credit card details for as little as $10–$40, which fraudsters then link to compromised agency accounts. This arbitrage allows scammers to run thousands of dollars in ads using other people&#8217;s money, while the identity brokers secure risk-free profit from the initial data sale.</p>
<p>Perhaps the most significant evolution is the shift to Scam-as-a-Service (ScaaS). Technical syndicates now build and lease entire fraud kits (pre-coded phishing sites, crypto drainer scripts, and back-end management panels) to lower-level criminals.</p>
<p>“Instead of charging a flat fee, these developers often take a commission. For instance, the Inferno Drainer malware operated on a 20% commission model, syphoning off a fifth of all stolen funds from its affiliates, generating over $87 million in illicit profit before ceasing operations. This franchise model allows technical groups to scale their revenue infinitely without ever directly engaging with a victim,” said Reuters journalist Jeff Horwitz, who has been covering the alleged ad-related irregularities involving Meta.</p>
<p>Finally, the demand for human engagement signals has created a labour economy in Southeast Asia (e.g., Vietnam, Myanmar) and parts of Eastern Europe. “Click Farms” or “Fraud Farms” employ low-wage workers to manually interact with ads, solve CAPTCHAs, and warm up accounts.</p>
<p>“These operations charge roughly $1 per 1,000 clicks/likes, creating a volume-based revenue stream that exploits global wage disparities to defeat advanced behavioural biometrics. By providing the human touch that algorithms crave, these farms monetise the very mechanism designed to stop them,” Horwitz said.</p>
<p>And it doesn’t stop there. The data collected at these farms is often resold. If you’ve been the victim of a cybercrime, there’s a 34% chance it will happen again if you’re an individual, and an 84% chance if you’re a business. Once scammed, you can end up on what’s called a ‘suckers list,’ marking you as an easy target. These lists are valuable, and people are willing to pay a lot to get them.</p>
<p><strong>How is the world reacting to it?</strong></p>
<p>The world is reacting to the industrialisation of ad fraud with a shift from “user beware” to platform liability. In 2024 and 2025, governments and industries moved to dismantle the economic impunity of platforms, forcing them to bear the costs of the fraud they facilitate.</p>
<p>The most significant development is the regulatory move to force reimbursement. For example, the UK Payment Systems Regulator implemented a mandatory reimbursement requirement for Authorised Push Payment (APP) fraud in 2024. Crucially, the liability is now split 50:50 between the sending bank and the receiving payment service provider.</p>
<p>While this primarily targets banks, it has created immense pressure from the financial sector on tech platforms. Banks, now on the hook for millions in refunds, are aggressively lobbying for a “polluter pays” model, arguing that since 60–80% of scams originate on Meta&#8217;s platforms, the tech giants should contribute to the reimbursement pot.</p>
<p>Effective December 2024, Singapore’s Shared Responsibility Framework assigns specific duties to financial institutions and telcos to mitigate phishing scams. If banks fail to send real-time transaction alerts or impose cooling-off periods, they are liable for losses. This creates a regulatory precedent where infrastructure providers are held financially accountable for gatekeeping failures. Governments are moving beyond voluntary codes of conduct to enforceable legislation with massive financial penalties.</p>
<p>The “UK Online Safety Act,” fully enforceable in 2025, requires platforms to proactively prevent fraudulent advertising. Non-compliance can result in fines of up to £18 million or 10% of global annual turnover (potentially billions for Meta).</p>
<p>In Europe, something similar is happening with the “Digital Services Act.” The European Commission has opened investigations into “Very Large Online Platforms” regarding their risk mitigation for fraudulent ads. The DSA empowers the European Union to fine companies up to 6% of their global turnover if they fail to manage systemic risks, including the spread of financial scams.</p>
<p>In Australia, the “Scams Prevention Framework,” which was passed in early 2025, introduces mandatory codes for banks, telcos, and digital platforms. It includes fines of up to AUD 50 million for non-compliance, specifically targeting the failure to detect and remove scam content.</p>
<p>Celebrities are litigating too. In Andrew Forrest vs Meta, an ongoing case, the Australian billionaire has pursued Meta in both Australian and US courts over the proliferation of crypto scams using his likeness. While the Australian criminal case was dropped due to evidential hurdles, the US civil lawsuit survived a motion to dismiss in 2024.</p>
<p>This case is pivotal as it challenges Section 230 immunity often claimed by platforms, arguing that Meta’s ad tools contributed to the content creation, thereby stripping them of neutral publisher status.</p>
<p>Even the Australian Competition and Consumer Commission sued Meta for aiding and abetting false conduct by publishing scam ads featuring public figures, arguing that Meta&#8217;s algorithms actively targeted these scams to susceptible users.</p>
<p>Meta has, under immense pressure, reversed its 2021 decision to abandon facial recognition. In late 2024, the company began testing facial recognition technology to combat “celeb-bait” scams. The system compares faces in suspected ads against the profile pictures of public figures.</p>
<p>If a match is found and the ad is a scam, it is blocked. This marks a significant concession, as it acknowledges that privacy concerns regarding biometrics are outweighed by the need to stop the financial bleeding caused by industrial-scale fraud.</p>
<p>Major players like Meta, Coinbase, and Match Group have formed coalitions to share intelligence on pig-butchering operations, aiming to sever the communication lines between the scam compounds and their victims.</p>
<p><strong>Engagement fuels fraud risks</strong></p>
<p>This is the aftermath of prioritising engagement over verification. You end up with an ecosystem where scams and fraud flourish, and customers get hurt. At the heart of this crisis lies the EAR algorithm, a mechanism that inadvertently subsidises deception by rewarding the hyper-engaging nature of scams with lower distribution costs. This economic alignment between the platform&#8217;s profit motives and the fraudster&#8217;s operational goals has created a “Market for Lemons,” where predatory content effectively crowds out legitimate commerce.</p>
<p>The “Retargeting Loop” further exacerbates this by trapping vulnerable populations in algorithmic echo chambers, commoditising their susceptibility, and reselling it through the secondary market of recovery scams.</p>
<p>Technologically, the ecosystem has evolved into an asymmetric arms race, where enforcement is consistently outpaced by evasion. The transition from simple static landing pages to Generation 4 cloaking technologies, which are capable of analysing device telemetry, battery status, and gyroscopic movements in milliseconds, demonstrates that fraud is no longer the domain of opportunistic amateurs. It has industrialised into a sophisticated Fraud-as-a-Service economy. This shadow supply chain, composed of bulletproof hosting providers, identity brokers on the dark web, and commercial cloaking services, operates with the efficiency of the legitimate software sector.</p>
<p>By lowering the technical barrier to entry, these enablers have democratised access to high-end evasion tools, allowing even low-skilled actors to launch enterprise-grade attacks against global platforms.</p>
<p>The failure of self-regulation is now evident in the global legislative pivot toward platform liability. For over a decade, the industry operated under a “user beware” paradigm, but the sheer scale of financial loss has forced a regulatory correction. Initiatives like the United Kingdom’s mandatory reimbursement requirement and Singapore’s “Shared Responsibility Framework” signal the end of platform immunity.</p>
<p>By shifting the financial burden of fraud from the victim to the infrastructure providers, regulators are attempting to realign economic incentives. Only when the cost of hosting a scam exceeds the revenue generated from its ads will platforms invest the necessary resources to close the technological loopholes they currently tolerate.</p>
<p>Ultimately, the future of the digital advertising economy hinges on a fundamental shift from plausible deniability to mandatory verification. The era of anonymous algorithmic bidding must yield to a “Know Your Business” standard, where access to the ad auction is predicated on verified identity rather than mere creditworthiness.</p>
<p>As Generative AI threatens to flood the web with infinite synthetic content, the only viable defence is a strict chain of custody for digital identity. If structural reform doesn’t ensue soon, corporate social media platforms will slowly transform into a black market without oversight.</p>
<p>The world is reacting, but laws are struggling to keep up with fast-moving algorithms. For now, as a reader and consumer, be careful: any ad you see on Instagram or Facebook could be a scam, carried by Meta Platforms, one of the world’s biggest advertising businesses.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/">Meta lets scammers pay to play</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deepfake fallout: Welcome to the age of paranoia</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=deepfake-fallout-welcome-to-the-age-of-paranoia</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Wed, 13 Aug 2025 07:47:15 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Deepfake]]></category>
		<category><![CDATA[email]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[phishing]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<category><![CDATA[Social Engineering]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=53207</guid>

					<description><![CDATA[<p>In Hong Kong, a financial worker was tricked into paying out $25 million when fraudsters used deepfake technology to impersonate the company’s CFO</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/">Deepfake fallout: Welcome to the age of paranoia</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="ai-optimize-54 ai-optimize-introduction"><span data-preserver-spaces="true">In 2025, reports emerged about cybercriminals using deepfake voice and video technology to impersonate senior US government officials and high-profile tech figures in sophisticated phishing campaigns designed to steal sensitive data.</span></p>
<p class="ai-optimize-55"><span data-preserver-spaces="true">According to the FBI, threat actors have been contacting current and former federal and state officials through fake voice and text messages claiming to be from trusted sources. These scammers then attempt to establish rapport before directing victims to malicious websites to extract passwords and other private information.</span></p>
<p class="ai-optimize-56"><span data-preserver-spaces="true">The FBI also cautions that once hackers compromise one official’s account, they may use that access to impersonate the victim further and target others within their network. Verifying identities, avoiding unsolicited links, and enabling multifactor authentication to protect sensitive accounts are therefore more crucial than ever.</span></p>
<p class="ai-optimize-57"><span data-preserver-spaces="true">The FBI and cybersecurity experts recommend examining media for visual inconsistencies, avoiding software downloads during unverified calls, and never sharing credentials or wallet access unless certain of the source’s legitimacy.</span></p>
<p class="ai-optimize-58"><strong><span data-preserver-spaces="true">An evolving threat</span></strong></p>
<p class="ai-optimize-59"><span data-preserver-spaces="true">Essentially, we are talking about scams where sophisticated AI </span><span data-preserver-spaces="true">is used to create</span> <span data-preserver-spaces="true">highly convincing</span><span data-preserver-spaces="true"> audio, images, text, or videos that look, sound, and act like real people. The easy availability of this technology practically gives fraudsters access to Hollywood-style special effects, enabling bad actors to commit deepfake fraud at scale. The World Bank reports that deepfake fraud has surged by 900% in recent years. Losses fuelled by generative AI </span><span data-preserver-spaces="true">are on track to</span><span data-preserver-spaces="true"> reach $40 billion by 2027.</span></p>
<p class="ai-optimize-60"><span data-preserver-spaces="true">Deepfake fraud has become troubling because of its highly realistic nature, accessibility to fraudsters, and scalability. Generative artificial intelligence and deepfakes are making existing types of fraud, such as new account fraud, account takeover, phishing, impersonations, and social engineering, even more costly. While voice-cloning deepfakes have successfully targeted several global businesses, video-based deepfakes are empowering criminal groups like the Yahoo Boys with compelling romance scams.</span></p>
<p class="ai-optimize-61"><span data-preserver-spaces="true">Consider this: Generative AI rapidly creates images that appear &#8216;realistic&#8217; with almost zero imperfections, eliminating telltale signs of deepfakes such as strange-looking fingers, distorted faces, or stretched-out arms. To make matters worse, using cloud computing, criminals can launch multiple attacks simultaneously or create a large volume of synthetic content for a targeted campaign, such as spear-phishing fraud.</span></p>
<p class="ai-optimize-62"><span data-preserver-spaces="true">Generative AI and deepfakes are already being incorporated into several common frauds. This includes &#8220;New Account Opening Fraud,&#8221; where criminals use deepfake technology with synthetic videos, audio, or images that appear to be a legitimate person opening a new bank account. From there, they can bypass facial recognition or liveness detection measures. By mimicking an account holder’s appearance, voice, and mannerisms, fraudsters can convince a customer service representative to grant them access to someone else’s account.</span></p>
<p class="ai-optimize-63"><span data-preserver-spaces="true">Spelling and grammar mistakes were once obvious red flags of phishing scams. However, thanks to GenAI, criminals are less likely to make these errors. Fraudsters can now craft persuasive phishing messages that are grammatically correct, contextually relevant, and have perfect spelling.</span></p>
<p class="ai-optimize-64"><span data-preserver-spaces="true">Fraudsters can also convincingly imitate individuals in professional settings, such as meetings or legal proceedings, to commit fraud. In personal settings, they can pretend to be a loved one </span><span data-preserver-spaces="true">in need of</span><span data-preserver-spaces="true"> financial or medical help, as in a romance or grandparent scam. Synthetic identities (fake identities created by combining real and fictitious information) are now appearing to look like real people. These synthetic identities are defrauding businesses and other individuals.</span></p>
<p class="ai-optimize-65"><span data-preserver-spaces="true">In Hong Kong, a financial worker was tricked into paying </span><span data-preserver-spaces="true">out</span><span data-preserver-spaces="true"> $25 million when fraudsters used deepfake technology to impersonate the company’s CFO. In Italy, a group of entrepreneurs was targeted by scammers earlier in 2025, who copied the Defence Minister Guido Crosetto’s voice and requested money to help pay the ransom of journalists kidnapped overseas.</span></p>
<p class="ai-optimize-66"><span data-preserver-spaces="true">At least one victim paid €1 million to an overseas account. WPP CEO Mark Read said scammers unsuccessfully used a combination of a voice clone and YouTube footage to set up a meeting between themselves and executives of the United Kingdom-based ad company in 2024.</span></p>
<p class="ai-optimize-67"><span data-preserver-spaces="true">Video-based deepfake frauds make impersonation-based fraud, like romance scams, even more difficult to catch. In 2024, American consumers lost an estimated $1.14 billion to romance scams. With deepfake technology, scammers can create a large library of fake online suitors. Aided by advanced large language models (LLMs) like LoveGPT, romance scammers can target multiple victims at the same time.</span></p>
<p class="ai-optimize-68"><span data-preserver-spaces="true">Manipulating publicly available images to commit romance scams has proven effective. In 2024, a scammer used simpler technology to deceive a French woman into believing she was in a relationship with Brad Pitt. Organised romance scam groups like the Yahoo Boys are creating more personalised communication for their targets in real time, making romance scams even more convincing and likely to succeed.</span></p>
<p class="ai-optimize-69"><span data-preserver-spaces="true">Even tech boss Elon Musk couldn&#8217;t save himself from being deepfaked. In 2024, AI-powered videos posing as genuine footage of the Tesla and X (formerly Twitter) boss went viral. The New York Times dubbed the deepfake Musk “the Internet’s biggest scammer.”</span></p>
<p class="ai-optimize-70"><span data-preserver-spaces="true">Steve Beauchamp, an 82-year-old retiree, told the New York Times that he drained his retirement fund and invested $690,000 in such a scam over several weeks, convinced that a video he had seen of Musk was real. His money soon vanished without a trace.</span></p>
<p class="ai-optimize-71"><span data-preserver-spaces="true">“Now, whether it was AI making him say the things that he was saying, I really don’t know. But as far as the picture, if somebody had said, ‘Pick him out of a lineup,’ that’s him. Looked just like Elon Musk, sounded just like Elon Musk, and I thought it was him,” Beauchamp told the NYT.</span></p>
<p class="ai-optimize-72"><span data-preserver-spaces="true">Deepfake-powered videos can fuel other impersonation tactics </span><span data-preserver-spaces="true">like</span><span data-preserver-spaces="true"> &#8220;CEO fraud&#8221; or grandparent scams. If the target believes they are interacting with </span><span data-preserver-spaces="true">the</span><span data-preserver-spaces="true"> real person, they are more inclined to follow their instructions to help their company or a family member.</span></p>
<p class="ai-optimize-73"><span data-preserver-spaces="true">While audio and visual manipulation have emerged as critical components behind the deepfakes&#8217; success, the rest depends on trust. Here, psychological manipulation from social engineering is working wonders for cybercriminals.</span></p>
<p class="ai-optimize-74"><span data-preserver-spaces="true">By scouring information like social media profiles, compromised data, or other sensitive information, fraudsters create specific scenarios that emotionally trigger their targets and quickly gain their attention and trust. </span><span data-preserver-spaces="true">The more detailed </span><span data-preserver-spaces="true">a story the scammer presents</span><span data-preserver-spaces="true">, the more believable it is.</span></p>
<p class="ai-optimize-75"><span data-preserver-spaces="true">Businesses and banks may see a rise in highly personalised “scams as a service” tactics. </span><span data-preserver-spaces="true">Criminals can purchase pre-configured deepfake materials for a specific target (a bank manager or executive)</span><span data-preserver-spaces="true">, in addition to accessing</span><span data-preserver-spaces="true"> information like email lists to gain intel on any financial organisation’s internal hierarchy.</span></p>
<p class="ai-optimize-76"><strong><span data-preserver-spaces="true">Money and trust erode</span></strong></p>
<p class="ai-optimize-77"><span data-preserver-spaces="true">In a 2024 Deloitte poll, 25.9% of executives revealed that their organisations had experienced one or more deepfake incidents targeting financial and accounting data in the 12 months prior, while 50% of all respondents said they expected a rise in attacks over the following 12 months.</span></p>
<p class="ai-optimize-78"><span data-preserver-spaces="true">The United States Financial Crimes Enforcement Network (FinCEN) issued an alert in 2024 to help financial institutions identify fraud schemes that use deepfake media created with GenAI tools.</span></p>
<p class="ai-optimize-79"><span data-preserver-spaces="true">The network observed </span><span data-preserver-spaces="true">an increase in</span><span data-preserver-spaces="true"> suspicious activity reports from financial institutions describing the suspected use of deepfake media in fraud schemes targeting their institutions and customers, beginning in 2023 and continuing into 2024.</span></p>
<p class="ai-optimize-80"><span data-preserver-spaces="true">Deloitte’s Centre for Financial Services predicts that GenAI could enable fraud losses to reach $40 billion in the United States by 2027. To make matters worse, digital trust is “crumbling” under an avalanche of synthetic media, misinformation, and deepfake fraud, according to a new report from Jumio.</span></p>
<p class="ai-optimize-81"><span data-preserver-spaces="true">The firm’s fourth annual &#8220;Jumio Online Identity Study&#8221; surveyed 8,001 adult consumers split equally among the United States, Mexico, the United Kingdom, and Singapore. The respondents have much in common: a growing fear that AI-powered fraud now poses a greater threat to personal security than traditional forms of identity theft, and a corresponding rise in scepticism about anything and everything online.</span></p>
<p class="ai-optimize-82"><span data-preserver-spaces="true">&#8220;Fraud-as-a-service (FaaS) ecosystems have erupted like a bad rash, enabling even amateur fraudsters to leverage synthetic identities, deepfake videos, and botnet-driven account takeovers. Consumers must navigate scam emails, manipulated social media content, and digitally altered identity documents. Seven out of ten global consumers (69%) indicated they are more sceptical of the content they see online due to AI-generated fraud than they were last year,&#8221; the report noted.</span></p>
<p class="ai-optimize-83"><span data-preserver-spaces="true">When asked who they trust most to protect their </span><span data-preserver-spaces="true">personal</span><span data-preserver-spaces="true"> data, 93% of respondents said they trust themselves over the government or Big Tech.</span></p>
<p class="ai-optimize-84"><span data-preserver-spaces="true">However, Jumio said, “Self-reliance does not mean consumers want to go it alone. </span><span data-preserver-spaces="true">In fact,</span><span data-preserver-spaces="true"> when asked who should be most responsible for stopping AI-powered fraud, 43% pointed to Big Tech, compared to just 18% who chose themselves.”</span></p>
<p class="ai-optimize-85"><span data-preserver-spaces="true">The research further showed that consumers are open to modernised fraud protection, even if it means additional steps. Most respondents globally said they would be willing to spend more time completing comprehensive identity verification processes, especially in sectors where the stakes are high, like banking or healthcare.</span></p>
<p class="ai-optimize-86"><span data-preserver-spaces="true">But the study also recognises that technology alone is not the answer. Jumio CEO Robert Prigge said, “Building a trustworthy digital world depends on strong consumer education and transparency. With day-to-day worries about generative algorithmic technologies on the rise, the trust gap continues to grow. As such, businesses must also earn consumer trust in these protections.”</span></p>
<p class="ai-optimize-87"><strong><span data-preserver-spaces="true">The age of paranoia kicks in</span></strong></p>
<p class="ai-optimize-88"><span data-preserver-spaces="true">Nicole Yelland, who works in public relations for a Detroit-based nonprofit, now conducts a multi-step background check whenever she receives a meeting request from someone she doesn’t know. Yelland runs the person’s information through Spokeo, a personal data aggregator. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call, with their camera on.</span></p>
<p class="ai-optimize-89"><span data-preserver-spaces="true">If Yelland sounds paranoid, that’s because she is. </span><span data-preserver-spaces="true">In January, </span><span data-preserver-spaces="true">before she started her current nonprofit role,</span><span data-preserver-spaces="true"> Yelland says</span><span data-preserver-spaces="true">, </span><span data-preserver-spaces="true">she got roped into an elaborate scam targeting job seekers.</span><span data-preserver-spaces="true"> &#8220;Now, I do the whole verification rigmarole any time someone reaches out to me,” she said to WIRED.</span></p>
<p class="ai-optimize-90"><span data-preserver-spaces="true">At a time when remote work and distributed teams have become commonplace, professional communication channels are no longer safe, thanks to GenAI-powered scams. The same AI tools that tech companies use to boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.</span></p>
<p class="ai-optimize-91"><span data-preserver-spaces="true">Big Tech journalist Lauren Goode said, &#8220;On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams have increased from $90 million to $500 million.&#8221;</span></p>
<p class="ai-optimize-92"><span data-preserver-spaces="true">Yelland says the scammers who approached her in January 2025 were impersonating a real company</span><span data-preserver-spaces="true">, one</span><span data-preserver-spaces="true"> with a legitimate product.</span><span data-preserver-spaces="true"> The “hiring manager” she corresponded with over email also seemed legit, even sharing a slide deck outlining the responsibilities of the role they were advertising.</span></p>
<p class="ai-optimize-93"><span data-preserver-spaces="true">However, Yelland says, during the first video interview, a Microsoft Teams meeting, the scammers refused to turn their cameras on and made unusual requests for detailed personal information, including her driver’s license number. Realising she’d been duped, Yelland slammed her laptop shut.</span></p>
<p class="ai-optimize-94"><span data-preserver-spaces="true">These schemes have spurred companies such as GetReal Labs and Reality Defender to build technologies that detect AI-enabled deepfakes. OpenAI CEO Sam Altman also co-founded an identity-verification startup called &#8220;Tools for Humanity,&#8221; which makes eye-scanning devices that capture a person’s biometric data, create a unique identifier for their identity, and store that information on the blockchain. The whole idea behind it is proving “personhood,” or that someone is a real human.</span></p>
<p class="ai-optimize-95"><span data-preserver-spaces="true">&#8220;A section of corporate professionals is also turning to old-fashioned social engineering techniques to verify every fishy-seeming interaction they have. Welcome to the age of paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words </span><span data-preserver-spaces="true">with each other</span><span data-preserver-spaces="true">, so they </span><span data-preserver-spaces="true">have a way to</span><span data-preserver-spaces="true"> ensure they’re not being misled if an encounter feels off,&#8221; Goode stated.</span></p>
<p class="ai-optimize-96"><span data-preserver-spaces="true">Daniel Goldman, a blockchain software engineer and former startup founder, said, &#8220;What’s funny is, the lo-fi approach works.&#8221;</span></p>
<p class="ai-optimize-97"><span data-preserver-spaces="true">Goldman began changing his professional behaviour after he heard that a prominent figure in the crypto world had been convincingly deepfaked on a video call.</span></p>
<p class="ai-optimize-98"><span data-preserver-spaces="true">He warned his close ones that even if they hear &#8220;his voice&#8221; or &#8220;see him&#8221; on a video call asking for money or an internet password, they should hang up and email him before doing anything.</span></p>
<p class="ai-optimize-99"><span data-preserver-spaces="true">Ken Schumacher, founder of the recruitment verification service Ropes, has worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their resume, such as their favourite coffee shops and places to hang out. Another verification tactic is what Schumacher calls the “phone camera trick.”</span></p>
<p class="ai-optimize-100"><span data-preserver-spaces="true">Here, if someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.</span></p>
<p class="ai-optimize-101"><span data-preserver-spaces="true">However, it’s safe to say this approach can also be off-putting: Honest job candidates may be hesitant to show off the inside of their homes or offices, or worry a hiring manager is trying to learn details about their personal lives.</span></p>
<p class="ai-optimize-102"><span data-preserver-spaces="true">“Everyone is on edge and wary of each other now,” Schumacher says, and it perfectly sums up the mood change people are undergoing in the age of GenAI-powered scams.</span></p>
<p class="ai-optimize-103"><span data-preserver-spaces="true">As deepfakes grow more advanced and accessible, AI-driven scams are reshaping cybercrime. Traditional security measures are no longer enough; vigilance, identity checks, and robust cybersecurity frameworks are now essential to counter the rising threat.</span></p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/">Deepfake fallout: Welcome to the age of paranoia</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Protect your business from BEC scams</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/protect-your-business-from-bec-scams/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=protect-your-business-from-bec-scams</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/protect-your-business-from-bec-scams/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Tue, 25 Feb 2025 05:56:10 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[BEC Scams]]></category>
		<category><![CDATA[Business Email Compromise Scams]]></category>
		<category><![CDATA[email]]></category>
		<category><![CDATA[Impersonation]]></category>
		<category><![CDATA[payments]]></category>
		<category><![CDATA[phishing]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[transactions]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=52433</guid>

					<description><![CDATA[<p>One of the primary tactics used in BEC scams is creating a false sense of urgency</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/protect-your-business-from-bec-scams/">Protect your business from BEC scams</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>According to the Federal Bureau of Investigation (FBI), Business Email Compromise (BEC) scams have cost businesses over $26 billion in the past few years. These scams are highly sophisticated and target employees at all levels, aiming to syphon money or sensitive information from companies. The impact of these scams is not limited to direct financial losses; they can also damage a company&#8217;s reputation, disrupt operations, and erode the trust between employees and management.</p>
<p>Understanding how to identify, prevent, and respond to these scams is essential for anyone who works in a business environment.</p>
<p>This article will take you through the methods scammers use, the psychology behind these scams, and, most importantly, how to spot a BEC scam before it compromises your business or personal information.</p>
<p><strong>The BEC scam</strong></p>
<p>A Business Email Compromise scam is a type of cyberattack in which a scammer gains access to or impersonates a trusted email account. The goal is to deceive someone in an organisation into performing actions such as transferring funds or disclosing sensitive information.</p>
<p>Unlike typical phishing emails, which may target anyone, BEC scams are highly targeted and often involve significant amounts of money. These attacks are not random but are instead the result of careful planning and research, where attackers gather detailed information about the company and its personnel.</p>
<p>BEC scams can take various forms, such as CEO fraud, account compromise, false invoice schemes, attorney impersonation, and data theft. These scams rely heavily on psychological manipulation. Unlike many other cybercrimes, BEC scams do not usually rely on malware or other technical exploits.</p>
<p>Instead, they use social engineering techniques to trick individuals into performing actions they believe are legitimate. They mimic trusted relationships, use a sense of urgency to force immediate action, and exploit hierarchical authority, making recipients less likely to question requests from superiors.</p>
<p>The impersonation techniques used in BEC scams are often extremely convincing. Attackers may spoof email addresses, create fake websites, and even use language that mirrors the company culture. They use public sources like social media and company websites to understand the roles and responsibilities of key personnel, allowing them to craft highly tailored attacks that seem plausible. The careful attention to detail is what makes these scams effective and so difficult to spot.</p>
<p>One of the primary tactics used in BEC scams is creating a false sense of urgency. This approach exploits a natural human reaction: the tendency to comply quickly when under pressure. A BEC scam email often appears to come from someone in a position of authority, such as a CEO or a director, and demands immediate action, such as transferring funds or sharing sensitive information.</p>
<p>Ronnie Tokazowski, a well-known security researcher, notes that scammers rely on creating a dysregulated emotional state, which makes it difficult for the victim to think critically. When a person feels pressured or stressed, they are more likely to bypass their usual cautious behaviour, which is exactly what scammers count on.</p>
<p><strong>Beware of isolation tactics</strong></p>
<p>Scammers also employ social engineering techniques that isolate you from colleagues. They may include phrases such as, “Keep this between us” or “This is confidential.” These phrases are designed to prevent you from seeking a second opinion. If an email urges you to keep something secret, that’s a red flag. The isolation tactic is used to make the victim feel that they are handling a sensitive matter and that involving others could be detrimental or embarrassing.</p>
<p>Isolation is a powerful tool because it reduces the chances of the victim cross-checking information, which could expose the scam. In a busy work environment, employees might not want to bother their superior or colleague with questions, especially if the email makes it seem like they should know what to do. By making the recipient feel like they are part of an exclusive communication, scammers manipulate them into complying without verification.</p>
<p>Even if an email seems urgent, you should always verify its authenticity using a separate communication channel. This might mean calling the person who supposedly sent the email or sending them a message on a verified internal communication tool like Slack or Microsoft Teams. Do not rely on the contact information provided in the email itself, as scammers often include phone numbers that they control. Verification might feel like a hassle in a fast-paced work environment, but it is a critical step that can prevent costly mistakes.</p>
<p>Always use contact information that you know to be genuine. If an email claims to be from your company&#8217;s CEO asking for a wire transfer, take a moment to call the CEO&#8217;s assistant or use a known phone number to confirm. The extra step of making a phone call or sending a message can mean the difference between falling for a scam and preventing one. Be especially suspicious if the email contains warnings not to verify the request with others or to keep it confidential.</p>
<p>Another effective way to spot a BEC scam is to carefully check the email address from which the request was sent. Scammers often use email addresses that look almost identical to legitimate ones. Look for subtle changes like a single letter or number. Also, check the domain to ensure it is correct and try clicking “Reply” to see if the email address in the “To” field changes to something different. These small details can often reveal a scam attempt.</p>
<p>Additionally, attackers sometimes register domains that are visually similar to legitimate ones. For example, they may replace an &#8220;m&#8221; with &#8220;rn&#8221; or use a domain ending like &#8220;.co&#8221; instead of &#8220;.com&#8221;. These slight modifications are designed to go unnoticed by busy employees who may be skimming through their emails. Carefully inspecting the domain can prevent these look-alike domains from fooling you.</p>
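<p>As a rough illustration of the checks described above, the short sketch below normalises common look-alike substitutions (such as &#8220;rn&#8221; standing in for &#8220;m&#8221;) and flags sender domains that nearly, but not exactly, match a trusted domain. The trusted domain, the homoglyph list, and the similarity threshold here are hypothetical examples, not a vetted detection rule:</p>

```python
# Sketch: flag sender domains that look deceptively similar to a trusted domain.
# The trusted domain below is an illustrative placeholder, not a real allow-list.
import difflib

TRUSTED_DOMAINS = {"example.com"}  # hypothetical company domain

# Common visual substitutions scammers use in look-alike domains
HOMOGLYPHS = [("rn", "m"), ("vv", "w"), ("0", "o"), ("1", "l")]

def normalise(domain: str) -> str:
    """Collapse common homoglyph tricks so 'exarnple.com' compares as 'example.com'."""
    d = domain.lower().strip()
    for fake, real in HOMOGLYPHS:
        d = d.replace(fake, real)
    return d

def check_sender(address: str) -> str:
    """Return 'trusted', 'look-alike', or 'unknown' for an email address."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        # Exact match after homoglyph normalisation, or a near match by edit similarity
        if normalise(domain) == normalise(trusted):
            return "look-alike"
        if difflib.SequenceMatcher(None, domain, trusted).ratio() > 0.8:
            return "look-alike"
    return "unknown"

print(check_sender("ceo@example.com"))   # trusted
print(check_sender("ceo@exarnple.com"))  # look-alike ('rn' posing as 'm')
print(check_sender("ceo@example.co"))    # look-alike (truncated '.com')
print(check_sender("friend@gmail.com"))  # unknown
```

<p>A filter like this only narrows the field; a domain that passes it can still be spoofed, which is why the header and authentication checks discussed below matter as well.</p>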
<p><strong>Follow proper verification protocols</strong></p>
<p>One of the most effective ways to protect yourself and your organisation from BEC scams is to follow established protocols for authorising payments and sharing sensitive information. Organisations should have standard procedures for making payments, and sensitive transactions should require multiple levels of approval. If you receive an email asking you to bypass these procedures, it should raise suspicion.</p>
<p>Proper protocols are designed to prevent exactly this type of fraudulent activity. Even when requests come from high-ranking officials, employees should follow verification procedures without exception. Hierarchical authority is often exploited in BEC scams, with attackers pretending to be someone with enough power to push people into bypassing standard safety measures. To combat this, companies need to establish clear guidelines that payments or sensitive actions cannot be authorised based on a single email.</p>
<p>In addition to manual verification, there are several technical measures you can use to check the legitimacy of an email. Inspecting email headers can provide clues as to whether an email is genuine. Headers contain metadata about the email, such as the servers it passed through. If an email that claims to be internal has headers showing that it originated from an external server, this is a major red flag.</p>
<p>Many organisations employ anti-phishing software that can identify and block BEC attempts. Employees should be aware of the tools available to them and should not hesitate to use them when in doubt. Companies can also use DMARC (Domain-based Message Authentication, Reporting, and Conformance), SPF (Sender Policy Framework), and DKIM (DomainKeys Identified Mail) to verify that emails sent from their domains are legitimate. These tools authenticate the source of emails and can help prevent spoofed emails from reaching employees&#8217; inboxes.</p>
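<p>Many mail providers record the outcome of these SPF, DKIM, and DMARC checks in an Authentication-Results header that can be inspected programmatically. The sketch below, a minimal illustration using only Python&#8217;s standard library, pulls those verdicts out of a raw message; the sample email is fabricated for the example:</p>

```python
# Sketch: read SPF/DKIM/DMARC verdicts from an email's Authentication-Results
# header. The raw message below is a fabricated example for illustration.
from email import message_from_string

RAW_EMAIL = """\
From: "Finance Director" <director@example.com>
To: clerk@example.com
Subject: Urgent wire transfer
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=bad.example.net; dkim=none; dmarc=fail header.from=example.com

Please process the attached invoice today.
"""

def auth_verdicts(raw: str) -> dict:
    """Return the recorded result for each mechanism, e.g. {'spf': 'fail', ...}."""
    msg = message_from_string(raw)
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    for clause in header.split(";"):
        clause = clause.strip()
        for mechanism in ("spf", "dkim", "dmarc"):
            if clause.startswith(mechanism + "="):
                # e.g. "spf=fail smtp.mailfrom=..." -> "fail"
                verdicts[mechanism] = clause.split("=", 1)[1].split()[0]
    return verdicts

results = auth_verdicts(RAW_EMAIL)
failing = [m for m, v in results.items() if v != "pass"]
print(results)  # {'spf': 'fail', 'dkim': 'none', 'dmarc': 'fail'}
if failing:
    print("Red flag: failed authentication checks:", ", ".join(failing))
```

<p>In practice, this header is added by the receiving mail server, so a parser like this belongs in mail-gateway tooling rather than end-user code; any mechanism that does not report &#8220;pass&#8221; on a message claiming to be internal deserves scrutiny.</p>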
<p><strong>Open communication culture</strong></p>
<p>Regular training can help employees recognise potential scams before they cause harm. One of the best ways to train employees is through simulated phishing attacks. By simulating what a BEC scam might look like, employees can learn in a safe environment what red flags to look for. These exercises help employees understand the evolving tactics used by attackers and make them more cautious when handling suspicious emails.</p>
<p>Cyber threats evolve, and so should your employees&#8217; knowledge. Interactive workshops, newsletters with examples of recent scams, and mandatory e-learning modules are all effective ways to keep security awareness fresh in employees&#8217; minds. The goal is to cultivate an instinctive scepticism towards unsolicited requests.</p>
<p>A culture of open communication can also significantly reduce the chances of a successful BEC scam. Employees should feel comfortable reaching out if they suspect something is wrong.</p>
<p>Ronnie Tokazowski suggests that skip-level meetings—where a senior leader meets with a junior employee without their direct manager—can help strengthen communication between employees and management. Companies should also ensure there are no repercussions for reporting suspicions, even if they turn out to be false alarms.</p>
<p>In an open communication culture, employees are more likely to verify unusual requests, even if they come from higher-ups. When employees fear repercussions or judgement, they are more inclined to comply without question. Encouraging employees to seek clarification and rewarding vigilance helps in creating an environment where questioning is valued as a security measure rather than frowned upon.</p>
<p><strong>Security measures</strong></p>
<p>Executives and other leaders need to be aware that their behaviour can either mitigate or exacerbate the risk of BEC scams. Leaders should avoid making unusual requests, especially via email, which makes it easier for scammers to impersonate them convincingly. Whenever possible, executives should stick to official channels and established procedures.</p>
<p>Implementing Multi-Factor Authentication (MFA) for email accounts can prevent scammers from gaining access even if they manage to obtain someone&#8217;s password. MFA adds an extra layer of security by requiring a second form of verification, such as a code sent to a phone. This additional layer makes it significantly harder for attackers to compromise accounts and impersonate executives.</p>
<p>Leaders should also be transparent about any scams that affect the company. This can reduce the stigma of falling for scams and encourage employees to be vigilant in the future. Setting up a payment verification process, such as requiring two sign-offs for all payments above a certain threshold, can prevent unauthorised transactions. Watching for red flags in email content, such as grammar and spelling errors, unusual formatting, or generic language, can also help in identifying scams.</p>
<p>It is also essential for leaders to model good security behaviours. If employees see that their leaders are vigilant—always verifying requests, following protocols, and using secure communication channels—they will be more likely to emulate these behaviours. Leadership plays a pivotal role in establishing a strong culture of cybersecurity, and their actions can set the tone for the entire organisation.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/protect-your-business-from-bec-scams/">Protect your business from BEC scams</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/protect-your-business-from-bec-scams/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>IF Insights: Australia&#8217;s big fight against scams</title>
		<link>https://internationalfinance.com/banking/if-insights-australias-big-fight-against-scams/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=if-insights-australias-big-fight-against-scams</link>
					<comments>https://internationalfinance.com/banking/if-insights-australias-big-fight-against-scams/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 21 Nov 2024 10:33:56 +0000</pubDate>
				<category><![CDATA[Banking]]></category>
		<category><![CDATA[Featured]]></category>
		<category><![CDATA[australia]]></category>
		<category><![CDATA[banking]]></category>
		<category><![CDATA[ePayments]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[HSBC]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[telecommunications]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=51430</guid>

					<description><![CDATA[<p>Australians lost a record USD 3.1 billion to scams in 2022, according to the Australian Competition and Consumer Commission</p>
<p>The post <a href="https://internationalfinance.com/banking/if-insights-australias-big-fight-against-scams/">IF Insights: Australia&#8217;s big fight against scams</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In a country that has become all too familiar with the rising tide of <a href="https://internationalfinance.com/magazine/banking-and-finance-magazine/staying-digitally-safe-from-banking-scams/"><strong>scams</strong></a>, Australia&#8217;s financial landscape is witnessing a significant shift.</p>
<p>Traditionally, scam victims have been left to foot the bill for their losses, while banks have offered little in terms of effective prevention or restitution.</p>
<p>However, a recent decision by the Australian Financial Complaints Authority (AFCA) has sparked hope for a more consumer-centric approach. This ruling has the potential to change how scams are handled across Australia’s banking industry, shifting responsibility from individuals to the institutions that should be safeguarding their financial well-being.</p>
<p><strong>A History Of Unequal Burden</strong></p>
<p>Australians lost a record USD 3.1 billion to scams in 2022, according to the Australian Competition and Consumer Commission (ACCC). This alarming figure marks a nearly 80% increase from the previous year, underscoring the accelerating sophistication of scams targeting individuals.</p>
<p>Traditionally, Australian banks have not fully shouldered the burden of these losses. The Australian Securities and Investments Commission (ASIC), in its 2023 review, found that while banks detected and halted a small proportion of fraudulent transactions, the total compensation paid to scam victims was a drop in the bucket compared to the overall losses.</p>
<p>This disparity is due in part to the voluntary nature of the ePayments Code, which many banks rely upon to avoid compensating customers who fall victim to scams. Under this code, banks are not obligated to provide restitution if the customer has disclosed their passcodes, even if under deceptive circumstances. This loophole has left many scam victims without recourse, prompting significant criticism and calls for reform.</p>
<p><strong>A Turning Point Arrives</strong></p>
<p>In November 2024, the AFCA&#8217;s decision to order HSBC to compensate a customer who lost more than USD 47,000 through a sophisticated bank impersonation or “spoofing” scam was a game-changer. In this case, the scammer contacted the victim, Mr. T, with a fraudulent text that appeared in a thread of legitimate messages from HSBC, making the scam appear credible.</p>
<p>The scammer also possessed sensitive information that Mr. T believed only the bank would have access to, leading him to reveal his online banking passcodes. This allowed the scammer to make an unauthorised transfer of USD 47,178.54.</p>
<p>HSBC argued that, under the ePayments Code, compensation should be ruled out because Mr. T had disclosed his passcodes voluntarily. However, AFCA disagreed, highlighting that Mr. T had been manipulated under duress and did not “voluntarily” disclose his information.</p>
<p>The ruling stated that the scam had employed psychological pressure and urgency, effectively coercing Mr. T into sharing his credentials. AFCA awarded compensation covering the majority of the stolen funds, lost interest, legal costs, and USD 1,000 for poor customer service by HSBC during the claims process.</p>
<p>This determination is significant because AFCA decisions are binding on financial institutions, and HSBC has no direct right of appeal. This not only provides restitution to Mr. T but also sets a precedent that may prompt broader shifts in how scam compensation claims are handled across the banking sector.</p>
<p><strong>Need For Broader Reforms</strong></p>
<p>The HSBC ruling comes at a crucial time, amid growing calls for reform that would make banks more responsible for scams that their customers face. Many scams, such as “push payment” frauds, where scammers trick victims into sending payments directly, fall outside the scope of the ePayments Code, as they involve the customer initiating the transaction. This means there is often no existing framework obligating banks to compensate victims, even if the customer has been deceived into transferring money to a scammer&#8217;s account.</p>
<p>A key aspect of AFCA&#8217;s jurisdiction is that its determinations are based on what is considered “fair in all the circumstances”, rather than strictly adhering to narrow legal codes. This gives AFCA the latitude to consider broader principles such as good industry practice and the need for banks to act proactively in scam prevention.</p>
<p>In determining whether compensation is warranted, AFCA takes into account the complexity of the scam, the bank&#8217;s efforts to warn or protect the customer, and whether the bank acted quickly and effectively when the scam was discovered.</p>
<p>According to the AFCA Ombudsman, David Locke, the ruling reflects the need for financial institutions to improve their vigilance against scams, especially as these frauds become increasingly sophisticated and difficult for ordinary consumers to detect.</p>
<p>“We are seeing scams that even well-informed and cautious individuals can fall prey to,” Locke said in a recent interview. This reflects a broader recognition that detecting these scams is often beyond the capability of individual customers, necessitating greater bank accountability.</p>
<p>In light of these systemic issues, the Australian banking sector has committed to several key reforms. In 2023, the Australian Banking Association (ABA) launched the “Scam-Safe Accord”, a sector-wide initiative designed to protect customers better.</p>
<p>The Scam-Safe Accord includes several measures aimed at detecting and preventing scams before they occur. Among these measures are the introduction of confirmation of payee service to ensure that account details match the intended recipient, delays for first-time payments, and the use of biometric identity checks for account verification.</p>
<p>Moreover, the Australian government is considering the “Scams Prevention Framework” legislation, which aims to impose even stricter requirements on banks, telecommunications companies, and digital platforms. Under this proposed framework, these entities would be required to take reasonable steps to prevent, detect, report, disrupt, and respond to scams.</p>
<p>This approach, drawing inspiration from similar frameworks introduced in the United Kingdom, represents an ambitious push towards collective accountability. In the UK, new rules mandate that both paying and receiving banks share responsibility for scam compensation, up to GBP 85,000 (approximately AUD 165,136), unless the customer was grossly negligent. <a href="https://internationalfinance.com/economy/australias-treasurer-says-china-stimulus-could-boost-growth-down-under/"><strong>Australia’s</strong></a> reforms are expected to have similar stipulations, potentially leading to increased protections for customers who fall victim to fraud.</p>
<p>Financial institutions are not the only entities under scrutiny. The Australian Communications and Media Authority (ACMA) and consumer advocacy groups have pointed out that many scams are facilitated via digital platforms and social media, with messaging services and fake advertisements being prominent vehicles for scam activity.</p>
<p>The proposed Scams Prevention Framework would also require digital platforms and telecommunications companies to be more proactive in curbing scam proliferation.</p>
<p>According to a 2023 report by the Australian Institute of Criminology, around 70% of scam victims first encountered scammers via online channels, including social media and SMS. Given this, the role of digital platforms in addressing scams cannot be overlooked.</p>
<p>Reforms are expected to introduce more stringent obligations for tech companies, similar to the Online Safety Act, which mandates that platforms take rapid action against harmful content.</p>
<p><strong>Implications For Consumers And Banks</strong></p>
<p>The AFCA ruling against HSBC represents a major step towards acknowledging the power imbalance between customers and the increasingly sophisticated networks of scammers targeting them. For Australian consumers, this may signal the beginning of a new era where banks take more active responsibility for securing customers&#8217; accounts, even in cases of customer error under duress.</p>
<p>However, experts caution that there is a long road ahead. Broadening the coverage of the ePayments Code and enacting the Scams Prevention Framework legislation will be key milestones in shifting the balance of responsibility from victims to institutions better positioned to detect and stop fraudulent activity.</p>
<p>According to Karen Cox, CEO of the Financial Rights Legal Centre, “These changes are a good start, but we need mandatory codes of conduct across the entire financial services industry to genuinely protect consumers. Until then, banks need to do more than just tell customers to &#8216;be careful&#8217;.&#8221;</p>
<p>For banks, the ruling sets a precedent that could have financial and reputational impacts if similar compensation claims increase. Banks will need to invest more in fraud detection technology and customer education initiatives. This may include improving customer support during incidents and enhancing real-time scam detection mechanisms, which could reduce both the occurrence of scams and the need for post-fraud compensation.</p>
<p>Addressing the complex issue of scams is not just a matter of caution on the part of consumers—it&#8217;s about fundamentally rethinking the responsibilities of financial institutions, technology platforms, and regulators in safeguarding people’s hard-earned money.</p>
<p>The post <a href="https://internationalfinance.com/banking/if-insights-australias-big-fight-against-scams/">IF Insights: Australia&#8217;s big fight against scams</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/banking/if-insights-australias-big-fight-against-scams/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>FBI issues warning regarding work from home scam</title>
		<link>https://internationalfinance.com/technology/fbi-issues-warning-regarding-work-from-home-scam/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=fbi-issues-warning-regarding-work-from-home-scam</link>
					<comments>https://internationalfinance.com/technology/fbi-issues-warning-regarding-work-from-home-scam/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Wed, 12 Jun 2024 06:20:39 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[cryptocurrency]]></category>
		<category><![CDATA[Fake Jobs]]></category>
		<category><![CDATA[FBI]]></category>
		<category><![CDATA[Federal Bureau Of Investigation]]></category>
		<category><![CDATA[jobs]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[Scam]]></category>
		<category><![CDATA[Scammers]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=50134</guid>

					<description><![CDATA[<p>It is always advisable to report any financial or personally identifiable information to the FBI IC3 rather than sending it to individuals who are making unsolicited job offers</p>
<p>The post <a href="https://internationalfinance.com/technology/fbi-issues-warning-regarding-work-from-home-scam/">FBI issues warning regarding work from home scam</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The United States Federal Bureau of Investigation (<a href="https://internationalfinance.com/currency/crypto-scams-cost-more-ransomware-says-fbi-us-initiates-civil-forfeiture-action/"><strong>FBI</strong></a>) has issued a warning about scammers who deceive victims into paying cryptocurrency by pretending to be remote job providers.</p>
<p>As per the warning, scammers are making cold calls and sending emails, offering people fake jobs that usually sound too good to be true. These remote jobs typically require workers to &#8220;optimise&#8221; a service by continuously clicking on a button, or they may involve simple tasks like rating restaurants. Employees can perform these jobs from home.</p>
<p>Scammers will either pose as a fictitious recruiting agency or will mimic a well-known one. When the victim is meant to receive payment, the scam really begins.</p>
<p>They are invited to sign up for a platform that allows them to track and monitor their pay, but in order to &#8220;unlock&#8221; the service, they must pay a small amount of cryptocurrency. The money is lost forever after they make the payment.</p>
<p><strong>Fake Platforms</strong></p>
<p>The platform appears to be &#8220;working&#8221; on the surface, which exacerbates the situation. Victims have the ability to &#8220;track&#8221; their payments and even view their revenue streams. But since everything is a scam and the money is fake, they will never be able to take any of it back.</p>
<p>In order to safeguard themselves, the FBI advises citizens to be wary of unsolicited job offer messages and to refrain from opening attachments, downloading files, or clicking on links in these messages.</p>
<p>The FBI cautions, &#8220;Never send money to an alleged employer,&#8221; and advises consumers not to pay for any services that promise to help them get their money back from lost <a href="https://internationalfinance.com/currency/six-reasons-why-you-should-invest-cryptocurrency/"><strong>cryptocurrency</strong></a> investments.</p>
<p>Finally, it is always advisable to report any financial or personally identifiable information to the FBI IC3 rather than sending it to individuals who are making unsolicited job offers.</p>
<p>In the world of cybercrime, fake jobs are nothing new. In fact, the notorious North Korean state-sponsored threat actor Lazarus Group helped popularise them over time.</p>
<p>The post <a href="https://internationalfinance.com/technology/fbi-issues-warning-regarding-work-from-home-scam/">FBI issues warning regarding work from home scam</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/technology/fbi-issues-warning-regarding-work-from-home-scam/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Crypto scams cost more than ransomware, says FBI as US initiates civil forfeiture action</title>
		<link>https://internationalfinance.com/currency/crypto-scams-cost-more-ransomware-says-fbi-us-initiates-civil-forfeiture-action/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=crypto-scams-cost-more-ransomware-says-fbi-us-initiates-civil-forfeiture-action</link>
					<comments>https://internationalfinance.com/currency/crypto-scams-cost-more-ransomware-says-fbi-us-initiates-civil-forfeiture-action/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Fri, 05 Apr 2024 00:30:30 +0000</pubDate>
				<category><![CDATA[Currency]]></category>
		<category><![CDATA[Featured]]></category>
		<category><![CDATA[Binance]]></category>
		<category><![CDATA[Coingape]]></category>
		<category><![CDATA[Crypto Scam]]></category>
		<category><![CDATA[cryptocurrency]]></category>
		<category><![CDATA[FBI]]></category>
		<category><![CDATA[Massachusetts]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[ransomware]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=49684</guid>

					<description><![CDATA[<p>Reports have emerged about the US Attorney’s Office in Massachusetts initiating a civil forfeiture action aimed at recouping USD 2.3 million in cryptocurrency</p>
<p>The post <a href="https://internationalfinance.com/currency/crypto-scams-cost-more-ransomware-says-fbi-us-initiates-civil-forfeiture-action/">Crypto scams cost more than ransomware, says FBI as US initiates civil forfeiture action</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>According to the United States-based Federal Bureau of Investigation (FBI), hackers are making more money through confidence and romance scams than they are through <a href="https://internationalfinance.com/technology/cybersecurity-company-dragos-failed-ransomware-attack-public/"><strong>ransomware</strong></a> attacks. However, the information from the law enforcement agency appears to be a little skewed.</p>
<p>According to the FBI, in 2023, people fell victim to multiple social engineering scams that resulted in the theft of USD 4.57 billion in cryptocurrency, a 38% increase over the USD 3.31 billion stolen the year prior.</p>
<p><strong>Ransomware In The Millions</strong></p>
<p>The majority of the time, the scammers would pose as gorgeous women and strike up weeks-long chats with their victims. They would advocate for group cryptocurrency investments or something similar, and recommend an app or <a href="https://internationalfinance.com/currency/cryptocurrency-one-stop-solution-africas-banking-problems/"><strong>cryptocurrency</strong></a> platform, which is typically fake and run by the attackers.</p>
<p>The scammers would attempt to continue the scam for as long as possible, by convincing the victims to &#8220;invest&#8221; as much as they can and even fabricating &#8220;gains&#8221; or money earned. Until they attempt to take their money out at some point.</p>
<p>At that point, the scammers move on to phase two, posing as the app&#8217;s customer service representatives and demanding payment of a &#8220;fee&#8221; for the victim to be able to withdraw their money. The more desperately victims try to recover their funds, the more they lose.</p>
<p>Ransomware attacks, in contrast to romance scams, netted a &#8220;minuscule&#8221; USD 59.6 million. The FBI acknowledges that this figure may not accurately represent the current state of ransomware, as it only includes incidents reported to the Internet Crime Complaint Centre (IC3) and excludes the cost of business downtime.</p>
<p>&#8220;Regardless of whether you or your organisation decided to pay the ransom, the FBI urges you to report ransomware incidents to the IC3. Doing so provides investigators with the critical information they need to track ransomware attackers, hold them accountable under US law, and prevent future attacks,&#8221; the FBI concluded.</p>
<p><strong>Biden Administration Acts Tough</strong></p>
<p>Meanwhile, reports have emerged about the US Attorney’s Office in Massachusetts initiating a civil forfeiture action aimed at recouping USD 2.3 million in cryptocurrency, lost by 37 victims across the country through online scams. The measure, part of a consolidated approach by federal agencies, aims to curb internet scams, especially those involving digital currencies.</p>
<p>&#8220;The forfeiture seeks to recover a mix of digital currencies, including USD Coin (USDC), Tether (USDT), Tron (TRX), Solana (SOL), Binance Coin (BNB), Cardano (ADA), and Ether (ETH), held in two Binance accounts. These assets were identified and frozen in January 2024 after an extended investigation into a “Pig Butchering” fraud that resulted in the loss of USD 400,000 to a Massachusetts resident,&#8221; CoinGape reported.</p>
<p>In “Pig Butchering,” fraudsters first build trust with their victims before persuading them to invest in nonexistent opportunities that result in financial losses.</p>
<p>&#8220;The action of the US Department of Justice was facilitated by a larger investigation into fraudulent schemes that prey on people through intricate online systems. These endeavours indicate the growing partnership of cryptocurrency platforms such as Binance and law enforcers in tracking and retrieving assets involved in criminal activities. The process not only consists of tracing the illegally gained funds but also finding the legal ways to send them back to their legitimate owners, illustrating how difficult and complex it is to control the digital financial environment,&#8221; CoinGape stated further.</p>
<p>&#8220;Concurrently, Tether was instrumental in helping the Department of Justice and the Federal Bureau of Investigation with the recovery and return of about USD 1.4 million in Tether (USDT) tokens. These were attached to a tech support scam most prevalent among old people, therefore showing the wider range of online frauds outside the cryptocurrency sector,&#8221; the media house observed.</p>
<p>The post <a href="https://internationalfinance.com/currency/crypto-scams-cost-more-ransomware-says-fbi-us-initiates-civil-forfeiture-action/">Crypto scams cost more than ransomware, says FBI as US initiates civil forfeiture action</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/currency/crypto-scams-cost-more-ransomware-says-fbi-us-initiates-civil-forfeiture-action/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Twitter&#8217;s cybercrime mess</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/twitters-cybercrime-mess/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=twitters-cybercrime-mess</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/twitters-cybercrime-mess/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 19 Oct 2023 00:47:36 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Blogging]]></category>
		<category><![CDATA[bots]]></category>
		<category><![CDATA[cryptocurrency]]></category>
		<category><![CDATA[cybercrime]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[email]]></category>
		<category><![CDATA[hacking]]></category>
		<category><![CDATA[phishing]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Tweets]]></category>
		<category><![CDATA[Twitter]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=48291</guid>

					<description><![CDATA[<p>According to the 2023 Axios Harris reputation rankings, Twitter under Elon Musk is the fourth-most-despised brand in the United States</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/twitters-cybercrime-mess/">Twitter&#8217;s cybercrime mess</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The abrupt resignation of Twitter officials in charge of brand safety and content moderation, after Elon Musk’s takeover of the micro-blogging platform in October 2022, has made the portal more open to hate speech and cybercrime than before.</p>
<p>Ella Irwin, the vice president of trust and safety at Twitter, left the organization. A.J. Brown, the organization&#8217;s head of brand safety and ad quality, and Maie Aiyed, a program manager who handled brand-safety relationships, reportedly resigned after Irwin left.</p>
<p>It has been close to a year since Elon Musk completed the $44 billion acquisition of Twitter, an investment which has so far proven to be a colossal loss for the maverick tech billionaire. He has significantly downsized the company&#8217;s employees and reversed content distribution-related restrictions. As a result, several companies stopped or reduced their advertising expenditures.</p>
<p>According to the 2023 Axios Harris reputation rankings, Twitter under Elon Musk is the fourth-most-despised brand in the United States. And the scepticism around his ownership of Twitter keeps growing.</p>
<p>Since Elon Musk took control, phishing attempts against Twitter (now rebranded as X) have increased. The changes to the ‘Twitter Blue Premium Verification&#8217; service have given threat actors a pretext to steal users&#8217; login information.</p>
<p>Researchers at cybersecurity vendor Proofpoint have noticed an upsurge in Twitter-related phishing attacks. According to the Proofpoint team, numerous advertisements have employed enticements relating to Twitter verification or the new Twitter Blue offering, such as &#8220;Twitter Blue Badge Billing Statement Available.&#8221;</p>
<p>After taking over the company, Elon Musk added an $8 monthly fee for the ‘Twitter Blue’ service. He has guaranteed that tweets from verified users will be prioritized on Twitter feeds. Users who paid were verified with the website&#8217;s well-known blue tick. The plan was nevertheless suspended due to a wave of spoof-account issues.</p>
<p><strong>Twitter and phishing attempts</strong></p>
<p>Twitter phishing attempts use URLs that redirect to criminal infrastructure in addition to Google Forms for data harvesting. Proofpoint&#8217;s Vice President of Threat Research and Detection, Sherrod DeGrippo, stated, “These initiatives typically target members of the media and the entertainment industry, including journalists and Twitter users who have the appearance of being verified. Frequently, the email address is the same as the Twitter handle used, or it may be found in the user&#8217;s Twitter bio.”</p>
<p>&#8220;While we have occasionally seen Twitter credential phishing employing lures linked to verification from cybercrime threat actors in the past, the activity has picked up recently,&#8221; the official added further.</p>
<p>In the past, TA482, a hacker gang, has frequently used Twitter-related phishing to target media users. But research published in July 2023 by Check Point Research claimed that delivery service DHL is the most impersonated company for phishing scams, followed by Microsoft and LinkedIn. When it comes to the most-targeted brands for these kinds of attacks, Twitter (rebranded as X) does not even make the top ten.</p>
<p>DeGrippo stated further, &#8220;To maximize the possibility that a user would interact with social engineering content, cybercriminal threat actors frequently exploit themes connected to important news stories and relevant to people&#8217;s interests.”</p>
<p>Even as Twitter itself remains in flux, acquiring access to accounts is still profitable. Twitter accounts that are legitimately verified typically have larger audiences than the average user, and compromised accounts can be used to spread false information, persuade users to interact with additional malicious content like fraudulent cryptocurrency schemes, and expand phishing campaigns to other users.</p>
<p>DeGrippo warned that &#8220;pig butchering&#8221; fraud, or attacks that start on social media networks before moving on to other services with the ultimate goal of obtaining cryptocurrency, might be launched via Twitter phishing. This kind of activity has increased lately, according to Proofpoint.</p>
<p><strong>Cybercrime on Twitter post takeover</strong></p>
<p>Impersonation of well-known firms has plagued the new authentication system Elon Musk created. Following fake tweets sent by spoof accounts using the names of their respective companies, Eli Lilly and Lockheed Martin suffered a decline in their share prices.</p>
<p>With the ransomware gang Yanluowang joining X in July 2023 to sell their wares, concerns have been raised that the network will be used by hackers to sell stolen data, owing to the billionaire Tesla CEO&#8217;s devotion to free speech.</p>
<p>He cut down the number of employees responsible for X’s safety and content moderation before the most recent high-profile departures from the concerned department took place. He fired the entire artificial intelligence ethics team, which was in charge of making sure that consumers weren&#8217;t served harmful content by algorithms.</p>
<p>The billionaire recently downplayed worries about the prevalence of hate speech on Twitter. During a Wall Street Journal event, he asserted that hate speech on the site has decreased since he took over the firm in October 2022 and that Twitter has reduced &#8220;spam, frauds, and bots&#8221; by &#8220;at least 90%.&#8221;</p>
<p>There is no data to back up those assertions, experts and ad industry insiders told CNBC. Some even claim that Twitter is purposefully obstructing independent researchers from tracking these numbers.</p>
<p><strong>Ponzi schemes</strong></p>
<p>X is among the most well-known social networks in the world. Naturally, it is also a sanctuary for scammers of all stripes and cybercriminals.</p>
<p>It&#8217;s important to familiarize yourself with common Twitter scams: how they operate, the risks they pose, and how to defend yourself against them.</p>
<p>There are many scams out there, such as phishing, account hacking, verification frauds, bitcoin scams, and bot scams.</p>
<p>Phishing, a sort of cyberattack in which a threat actor impersonates someone or something they are not, can affect any social media network. With Twitter (rebranded as X), a con artist has virtually endless opportunities to phish users. To trick the target into entering their credentials, they can send phishing emails containing false messages.</p>
<p>In November 2022, not long after seizing control of Twitter, Elon Musk unveiled ‘Twitter Blue’, a monthly subscription service that costs money and adds a blue checkmark to a user&#8217;s account.</p>
<p>According to a study by Bleeping Computer, con artists promptly took note of this attempt and launched a sophisticated phishing assault to steal the usernames and passwords of users who wanted to confirm their accounts.</p>
<p>Since Twitter&#8217;s creation, similar phishing campaigns have plagued the social media platform, with fraudsters coming up with ever-creative ways to steal user credentials. The best thing a user can do is to set up two-factor verification and carefully examine each email that purports to be from Twitter because this won&#8217;t change regardless of who is in charge of the social network.</p>
<p>X&#8217;s security and user experience have deteriorated under Elon Musk&#8217;s ownership, becoming increasingly perilous for users. In a recent story published by Wired.com, Tim Utzig, a visually impaired individual, was deceived by scammers on the micro-blogging platform. Tim, relying on a screen reader, couldn&#8217;t detect the scam indicators when responding to a tweet from a compromised account. He lost $1,000 in the process.</p>
<p>The author, concerned by the social media portal&#8217;s lack of responsiveness, teamed up with a social engineering expert named Steve to track down the scammers. The efforts revealed a network of fraudsters using elaborate methods, exploiting vulnerabilities, and leveraging blockchain transactions to deceive victims. Multiple individuals were identified through their payment accounts, linked to real-world addresses, underscoring the scope of the scam.</p>
<p>This story illuminates several critical issues with X. The rise in fraudulent activities on the platform, exemplified by Tim&#8217;s case, indicates a worrisome lack of effective security measures. The decline in accessibility support for visually impaired users further compounds the problem, leaving vulnerable individuals like Tim susceptible to exploitation.</p>
<p>The narrative also raises concerns about Twitter&#8217;s changing priorities, as evidenced by its rebranding to &#8220;X&#8221; and ambitious plans to become an &#8220;everything app.&#8221; This pivot, while aiming to expand the platform&#8217;s capabilities, poses significant security risks given the existing vulnerabilities that scammers exploit. The story serves as a cautionary tale, emphasizing the need for users to be vigilant and the urgent necessity for Twitter to prioritize both accessibility and security to prevent further harm to its user base.</p>
<p>Then there are account hacking scams. The blue checkmark on Twitter has always been reserved for the most eminent people, including celebrities, politicians, and influencers. Cybercriminals, however, have always coveted the social proof that comes with obtaining a blue check, and routinely hack verified accounts to get one.</p>
<p>For instance, a 17-year-old teenager hacked the Twitter accounts of Joe Biden, the then-presidential contender, and Bill Gates, the co-founder of Microsoft, in 2020 using a straightforward social engineering technique. The teenager received a three-year prison sentence, but his actions demonstrated how simple it is for cybercriminals to hack verified Twitter accounts, according to The Guardian.</p>
<p>It&#8217;s easy to suppose that many people fell for the young boy&#8217;s con after he hacked into Biden and Gates&#8217; accounts to demand a Bitcoin payment. However, this was not an isolated incident; breaches occur much too regularly, and most often, regular users are the ones who suffer. This is why it&#8217;s crucial to keep in mind that you shouldn&#8217;t ever blindly believe what you see on Twitter. Even if it seems like your favourite celebrity is truly tweeting, make sure to confirm that their message is authentic before taking any action.</p>
<p>Conversion frauds are also tricky. Because everyone wants a blue checkmark, cybercriminals keep developing more inventive ways to con consumers. Whether you use Facebook, Twitter, or Instagram, chances are you have received a message from someone promising to quickly verify your account.</p>
<p>In practice, there are only two ways to have a verified Twitter account. One is a holdover from the original approach: making a formal verification request through the platform. You had to meet several requirements to receive the blue badge, most importantly demonstrating that you were a &#8220;notable&#8221; person in politics, the media, or another field. This route is no longer available, although those who previously had verified accounts may still sport the blue tick icon.</p>
<p>If you still want the tiny blue checkmark, there is currently just one way to get it: sign up for ‘Twitter Blue’.</p>
<p>Additionally, be sure to report any con artists who offer to verify your account to Twitter. Visit X&#8217;s support page and complete the necessary form there to accomplish this.</p>
<p>In the cryptocurrency industry, scams are all too rampant, and many of them take place on Twitter. You have probably encountered one if you follow cryptocurrency-related accounts or occasionally post about cryptocurrencies.</p>
<p>Twitter cryptocurrency scams come in a variety of forms, some glaringly evident and others more subtle. Con artists often pretend to be well-known digital currency influencers or analysts, posting false tweets or even sending direct messages to their intended victims. Their tweets may promote worthless cryptocurrencies that will eventually lose value or advertise phoney airdrops and dubious services.</p>
<p>Another scammer favourite is the fake cryptocurrency giveaway. This kind of hoax relies on persuading victims that they will receive a huge reward in exchange for a small cryptocurrency deposit to cover a &#8220;fee&#8221; or something comparable. Of course, if you make the mistake of depositing it, the fraudster will simply take your money and move on to the next victim.</p>
<p>Make sure you thoroughly research any information regarding a specific asset and only trade on reputable cryptocurrency exchanges if you want to avoid falling victim to crypto-related scams on Twitter.</p>
<p>Then there are bot scams. As you may already be aware, social media sites are crawling with bots—computer programs that mimic human activity. Twitter is no different. A 2022 study from the online analytics firm Similarweb discovered that 5% of Twitter users are bots and that they produce between 21% and 29% of the network&#8217;s content.</p>
<p>Although bots are not inherently evil, con artists frequently use them to disseminate false and misleading information, encourage victims to click on harmful links, install malware, and carry out other harmful activities. On Twitter, networks of bots may work together to retweet and like posts to reach a larger audience.</p>
<p>Some Twitter bots can be challenging to recognize and may initially resemble real accounts, so always examine any account that seems suspicious, especially if it frequently spams links in replies to other tweets or sends unsolicited direct messages. If you believe an account is a harmful bot, block or mute it, then report it to the micro-blogging platform.</p>
<p>The recent developments surrounding Twitter, including the departure of key officials responsible for brand safety and content moderation, have raised concerns about the platform&#8217;s susceptibility to hate speech and cybercrime. The abrupt resignation of prominent figures like Ella Irwin, A.J. Brown, and Maie Aiyed has had an impact on the platform&#8217;s ability to maintain a safe and controlled online environment.</p>
<p>Since Elon Musk acquired Twitter and his subsequent changes, there have been notable shifts in the platform&#8217;s policies and practices. These changes have led to decreased content restrictions and alterations to the premium verification service, which has been exploited by cybercriminals for phishing attempts. These phishing attacks use various tactics, including false email messages and Google Forms, to trick users into revealing their login credentials.</p>
<p>Additionally, Elon Musk&#8217;s takeover seems to have made Twitter a more attractive target for hackers, increasing phishing attempts. The compromised accounts, particularly those with the coveted blue checkmark, can be used to spread false information, promote scams, and expand phishing campaigns to other users.</p>
<p>Furthermore, concerns have been raised about the rise of cybercrime on Twitter, such as the selling of stolen data and the potential for pig butchering fraud, which involves using the platform as a stepping stone to other services and ultimately targeting cryptocurrency.</p>
<p>It&#8217;s worth noting that while Elon Musk has claimed improvements in reducing hate speech and spam on the platform, these assertions lack concrete data to support them. The prevalence of scams and cybercrime, including Ponzi schemes, phishing, account hacking, and bot scams, remains a significant challenge for Twitter users.</p>
<p>In navigating this landscape, users are advised to exercise caution, practice good online hygiene, and be sceptical of unsolicited messages or offers. Implementing two-factor authentication, carefully scrutinizing emails and messages, and reporting suspicious accounts are crucial steps to protect oneself from falling victim to cybercrime on the platform. As Twitter continues to evolve under Elon Musk&#8217;s ownership, vigilance and awareness remain the keys to staying safe in this ever-changing digital environment.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/twitters-cybercrime-mess/">Twitter&#8217;s cybercrime mess</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/twitters-cybercrime-mess/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Fake customer service agents, &#8220;Fox8&#8221; botnets: Scammers run riot on X</title>
		<link>https://internationalfinance.com/technology/fake-customer-service-agents-fox8-botnets-scammers-run-riot/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=fake-customer-service-agents-fox8-botnets-scammers-run-riot</link>
					<comments>https://internationalfinance.com/technology/fake-customer-service-agents-fox8-botnets-scammers-run-riot/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 31 Aug 2023 05:29:50 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Bank]]></category>
		<category><![CDATA[crypto]]></category>
		<category><![CDATA[Fraudsters]]></category>
		<category><![CDATA[Micro-Blogging]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Twitter]]></category>
		<category><![CDATA[WhatsApp]]></category>
		<category><![CDATA[X]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=47885</guid>

					<description><![CDATA[<p>X’s terms and conditions do not state whether subscriber accounts are pre-vetted</p>
<p>The post <a href="https://internationalfinance.com/technology/fake-customer-service-agents-fox8-botnets-scammers-run-riot/">Fake customer service agents, &#8220;Fox8&#8221; botnets: Scammers run riot on X</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Consumers complaining of poor customer service on X (rebranded Twitter) are being targeted by scammers after the Elon Musk-led micro-blogging platform changed its account verification process.</p>
<p>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X, a Guardian report stated, adding that fraudsters masquerading as customer service agents respond from fake X handles and trick victims into disclosing their bank details in exchange for a promised refund.</p>
<p>&#8220;They (fraudsters) typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X,&#8221; the report explained.</p>
<p>&#8220;Changes introduced this year allow the icon to be bought by anyone who pays an 11 pounds monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay 950 pounds a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted,&#8221; the report remarked further.</p>
<p>The Guardian spoke to an individual named Andrew Thomas, who was contacted by a scam account after posting a complaint to the travel platform Booking.com.</p>
<p>“I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he told the media outlet.</p>
<p>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app,” the person added further.</p>
<p>“It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” Thomas said, as he became suspicious and checked the X profile of the &#8216;travel platform&#8217;.</p>
<p>“I then checked the WhatsApp caller ID and found it was a Kenyan number. I’ve since come across other fake Booking.com Twitter accounts which are following customers who are at their wits’ end trying to get a refund and have resorted to X to air their grievance with the company,” he narrated further.</p>
<p>Booking.com has now reportedly refunded Thomas after the incident caught media attention.</p>
<p>In June 2023, passengers affected by easyJet and British Airways flight cancellations were targeted by cybercriminals using fake profiles after they resorted to X to demand refunds.</p>
<p>Both airlines told the Observer that they had reported the accounts to X. BA even has a pinned tweet alerting users to fake accounts.</p>
<p>It is not just the tourism industry: bank customers in the United Kingdom have also reportedly been warned to stay vigilant, as scammers are on the lookout for tweets they can exploit to obtain personal account details.</p>
<p>Lisa Webb, a consumer law expert at the campaign organisation Which?, blamed the recent changes to X&#8217;s verification processes, which she believed had made it harder for users to identify trusted accounts.</p>
<p>“Complaining to a company on social media can be an effective tactic to get a quick response, but check to make sure this is coming from its official account and, if in doubt, get in touch with the company directly using the contact details on their official website,” she said, while urging the Rishi Sunak government to pass the online safety bill on an immediate priority basis and ensure “it delivers meaningful protections for consumers against a flood of online fraud infiltrating the world’s biggest social media sites and search engines”.</p>
<p><strong>Threat Actors Running Riot On X</strong></p>
<p>According to a recent study by researchers from Indiana University, around 1,140 AI-powered accounts have been identified on X, a network the research team has named the &#8220;Fox8&#8221; botnet. These accounts reportedly use technology like ChatGPT to create fake content and steal pictures to build fake profiles.</p>
<p>As per the New York Post, these bot accounts are aiming to trick people into investing in fake cryptocurrencies. The Indiana University researchers even suspect that the bots might have stolen from real crypto wallets, while using hashtags like #bitcoin and #crypto and interacting with real human-run accounts focusing on crypto news.</p>
<p>The Fox8 botnet accounts are also spreading misinformation on various topics, including health and politics.</p>
<p>The bots flood the micro-blogging platform with numerous AI-generated posts, increasing the chances that real X users will see them and, ultimately, the likelihood that someone will click on a harmful link.</p>
<p>These bots not only use stolen photos but also interact with each other and maintain their own followers and friends. The Indiana University researchers further noted that the accounts have become more believable thanks to advancements in language models like ChatGPT.</p>
<p>The post <a href="https://internationalfinance.com/technology/fake-customer-service-agents-fox8-botnets-scammers-run-riot/">Fake customer service agents, &#8220;Fox8&#8221; botnets: Scammers run riot on X</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/technology/fake-customer-service-agents-fox8-botnets-scammers-run-riot/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Staying digitally safe from banking scams</title>
		<link>https://internationalfinance.com/magazine/banking-and-finance-magazine/staying-digitally-safe-from-banking-scams/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=staying-digitally-safe-from-banking-scams</link>
					<comments>https://internationalfinance.com/magazine/banking-and-finance-magazine/staying-digitally-safe-from-banking-scams/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Tue, 06 Jun 2023 05:30:53 +0000</pubDate>
				<category><![CDATA[Banking and Finance]]></category>
		<category><![CDATA[Magazine]]></category>
		<category><![CDATA[ATM]]></category>
		<category><![CDATA[Bank]]></category>
		<category><![CDATA[banking]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[email]]></category>
		<category><![CDATA[malware]]></category>
		<category><![CDATA[phishing]]></category>
		<category><![CDATA[RaaS]]></category>
		<category><![CDATA[ransomware]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<category><![CDATA[Skimming]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=47149</guid>

					<description><![CDATA[<p>Technology and banking scams are becoming increasingly sophisticated, and it's essential to be aware of the dangers and take steps to protect yourself</p>
<p>The post <a href="https://internationalfinance.com/magazine/banking-and-finance-magazine/staying-digitally-safe-from-banking-scams/">Staying digitally safe from banking scams</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As technology advances, so do threat actors&#8217; methods to target unsuspecting victims. As a result, banking scams are becoming increasingly common, and it&#8217;s essential to be aware of the dangers and take steps to protect oneself. This article will explore the most common types of 21st-century banking scams and provide tips on how to avoid these crimes.</p>
<p><strong>Phishing scams</strong><br />
Phishing scams are among the most common. Under this method, threat actors send fraudulent emails, texts, or social media messages that appear to come from a legitimate source, such as a bank, to get the victim to disclose personal information or login credentials. Once scammers have this information, they can steal money or commit identity theft.</p>
<p>It&#8217;s important to double-check the sender&#8217;s email address or social media handle to avoid falling victim to phishing scams. Keep one simple thing in mind: legitimate banks and financial institutions never ask for personal information or login credentials over email or social media. If you doubt the legitimacy of such an email or message, contact your bank immediately.</p>
<p>Attacks of this nature are becoming more frequent and sophisticated. SlashNext, a messaging security company, conducted a study in October 2022 in which it examined billions of link-based URLs, natural language messages, and attachments sent over email, mobile devices, and web browsers over six months, discovering more than 255 million threat elements. That represents a 61% rise in phishing attacks since 2021.</p>
<p>The survey also found increasing use of personal and mobile communication channels among cybercriminals, with fraud and credential theft topping the list and attacks on mobile devices increasing by 50%.</p>
<p>According to Jess Burn, senior analyst at Forrester Research, &#8220;We&#8217;ve been seeing an increase in the use of voicemail and text as part of two-pronged phishing and BEC [business email compromise] campaigns.&#8221; </p>
<p>The attackers either give the sender more credibility or make the request seem more urgent by leaving a voicemail or sending a text regarding the email they sent. </p>
<p>Burn said the company is getting a lot of questions from clients concerning BEC (Business Email Compromise) assaults in general. </p>
<p>&#8220;Bad actors are turning to traditional fraud to make money because geopolitical unrest is disrupting ransomware gang activity, and cryptocurrency, the preferred method of ransom payment, has been imploding recently,&#8221; he added, noting that BEC is increasing as a result. </p>
<p>Criminals launch phishing attacks during the sales and tax seasons. People should be cautious of spearphishing, a more specialized variation of phishing that frequently employs topical lures.</p>
<p>Luke McNamara, principal analyst at cyber security consulting firm Mandiant Consulting, said that the topics and themes &#8220;might evolve with global or even seasonal events.&#8221; </p>
<p>&#8220;For instance, given that it is the Christmas season, we can anticipate seeing more phishing lures relating to sales. Threat actors may similarly attempt to abuse users who are filing their taxes during regional tax seasons by sending phishing emails with tax-related subject lines,” the official commented. </p>
<p>According to McNamara, general phishing themes include emails purporting to be from technology vendors about account resets. In contrast, more targeted efforts by threat actors engaged in cyber espionage may use more particular phishing lures. </p>
<p>&#8220;More prolific criminal campaigns might leverage less specific themes,&#8221; he noted.</p>
<p><strong>Recognizing phishing emails</strong><br />
Ask yourself the following questions: </p>
<p>Were you expecting it? If the communication is from an unknown source, take a moment to consider your actions before responding, clicking a link, or downloading any attached files. </p>
<p>Who is the message&#8217;s sender? Is this the email address you were hoping for? Cybercriminals may try to deceive you by using a similar email address. Please verify the email address&#8217;s spelling, the domain&#8217;s legitimacy, and whether it corresponds to the sender&#8217;s name. </p>
<p>Does it demand action from you? Phishing emails typically instruct you to click a link, download an attachment, or reply with personal information. They frequently aim to instil a sense of urgency to elicit a hasty and unreasonable response.</p>
<p>Instead of clicking on the links they provide, you should always verify the email&#8217;s legitimacy with information you can obtain independently. While conducting financial activities, avoid clicking on email links and instead log in to your bank account via the official website/app.</p>
<p><strong>Ransomware &#038; malware</strong><br />
Ransomware and malware are malicious programs that can infect your computer, phone, or other devices. This kind of software gives scammers access to your personal information and files, and can even lock you out of your device until you pay the scammers a ransom.</p>
<p>The effects of ransomware attacks are becoming more significant for 21st-century businesses.</p>
<p>As ransomware-as-a-service (RaaS) grows increasingly common, even smaller businesses may now become cybercrime targets. RaaS has made launching ransomware attacks simple and economical, even for inexperienced cybercriminals.</p>
<p>These medium and small businesses are particularly vulnerable, with supply chain attacks up 663%. A single malware attack may give a cybercriminal access to systems and clients’ data. The scary part is that 70% of these malware attacks also involve ransomware, enabling cybercriminals to demand payments from the targeted companies and their customers.</p>
<p>Businesses must be 24*7 ready for ransomware attacks. Here is what business leaders need to know about protecting their organizations from ransomware in 2023.</p>
<p><strong>Who is susceptible to a ransomware assault?</strong><br />
In the past, when cybercriminals launched a malware assault, they frequently had a particular target in mind.</p>
<p>Cybercriminals wanted to steal large quantities of personally identifiable information (PII) or data with a higher resale value, like medical records and financial information, as reselling PII was a significant factor in data breaches. As a result, skilled hackers usually preyed on huge companies with sizable databases of valuable PII, such as banking and medical institutions in industrialized nations.</p>
<p>Cyberattacks are becoming more common and profitable with the advent of ransomware. Threat actors can make money simply by encrypting a company&#8217;s data and extorting payment in exchange for its decryption. In addition, a new threat has emerged in the form of double extortion ransomware attacks, in which cybercriminals collect the ransom payment and then resell the targeted company’s confidential data on the dark web to increase their profits.</p>
<p>As RaaS gains popularity, the likelihood of a double extortion ransomware assault increases even further. Cybercriminals without technical expertise can now profit from ransomware attacks thanks to RaaS. </p>
<p>RaaS users are now targeting emerging markets rather than developed ones, because cybercrime gangs frequently charge higher fees to attack businesses headquartered in wealthy nations.</p>
<p>Given the potential for enormous payouts, it is understandable why thieves employ ransomware to steal as much as 10 TB of data each month.</p>
<p><strong>Supply chains are rife with ransomware</strong><br />
A significant factor in the rising ransomware risk is the global supply chain.</p>
<p>Most businesses collaborate with hundreds, if not thousands, of outside vendors and service providers, including MSPs (Managed Service Providers) that handle their cybersecurity. However, a cybercriminal only needs one vulnerable endpoint to introduce malicious software into a network or application, placing the business and its customers at risk.</p>
<p>MSPs must safeguard their clients&#8217; IT infrastructure from malware because they oversee their security. An attacker who gains unauthorized access to an MSP&#8217;s network can also readily access the IT infrastructures of the target’s clients. The MSP and its clients are then vulnerable to ransomware attacks.</p>
<p>A 2021 ransomware attack on the MSP software provider Kaseya sought a $70 million ransom payment to restore the data of as many as 70 of the business’s clients. Because the software stored information on each MSP&#8217;s customers, however, the assault ultimately affected 1,500 companies in at least 17 nations.</p>
<p><strong>Ransomware is widespread now</strong><br />
Anyone can rent professional ransomware tools, purchase instructional DIY kits to create and launch attacks, or employ a criminal organization to deploy ransomware assaults, thanks to RaaS. Additionally, RaaS is accessible and economical for nascent cybercriminals because these malicious source codes are available for as little as $39.</p>
<p>To collect RaaS income, several cybercrime gangs have adopted a subscription affiliate model with profit sharing. A threat actor pays a monthly subscription to gain access to ransomware tools, code, and deployment help, and the gang automatically takes a portion of the ransom money each time a cybercriminal uses the gang&#8217;s harmful code to collect a ransom.</p>
<p>This strategy makes smaller businesses and organizations in developing nations more susceptible to ransomware. Even though attacks on these companies are typically not profitable enough for major cybercrime gangs, such businesses have become vulnerable targets for a new generation of cybercriminals trying to make a profit. The attacks are inexpensive to deploy, yet they now cost businesses millions of dollars in ransom payments, clean-up expenses, compliance fines, and lost revenue.</p>
<p><strong>How criminals disseminate ransomware</strong><br />
Cybercriminals frequently combine their methods when trying to gain access to IT infrastructure and introduce dangerous ransomware. Some use various techniques to locate flaws and obtain credentials to boost their chances of success, while others launch ransomware assaults in the hope of discovering zero-day vulnerabilities.</p>
<p>Phishing assaults, undoubtedly the most popular means to steal passwords or spread malicious URLs, increased by 120% in Q3 of 2022. It is customary for cyber attackers to initiate phishing attempts and obtain access to an IT environment before spreading ransomware because stolen credentials are routinely the top cause of breaches.</p>
<p>Cybercriminals frequently target MSPs to access their clients&#8217; systems and spread ransomware further, because many MSPs manage access permissions for their clients&#8217; systems.</p>
<p><strong>Knowing cybersecurity trends is only half the battle won</strong><br />
Unfortunately, cybercriminals always seem to be one step ahead when exploiting weaknesses. Learning about cybersecurity trends like ransomware-as-a-service is essential to staying current, but awareness is just half the battle won.</p>
<p><strong>ATM skimming</strong><br />
ATM skimming is when scammers place a device on an ATM to capture your card information and PIN as you use the machine. This information is then used to make fraudulent purchases or withdrawals from your account.</p>
<p>To avoid falling victim to ATM skimming, it&#8217;s important to always check the ATM for any signs of tampering, such as loose or extra attachments. Also, cover your hand as you enter your PIN to prevent scammers from visually capturing it.</p>
<p>Skimming is the illegal installation of equipment on petrol pumps, ATMs, and point-of-sale terminals to steal information such as card numbers and PINs. With this data, fraudsters can create fake credit or debit cards. According to estimates, skimming results in more than $1 billion in annual financial losses.</p>
<p><strong>Pump skimming for fuel</strong><br />
Fuel pump skimmers are typically located in the machine&#8217;s internal wiring, out of the customer&#8217;s view. The data-collection devices store information for subsequent wireless transfer or download.</p>
<p><strong>Guidelines to avoid pump skimming</strong><br />
Select a fuel pump closer to the store and in the attendant&#8217;s line of sight; skimmers are less likely to target these pumps. Use a credit card rather than a debit card. Cover the keypad while entering your PIN. Instead of paying at the pump, consider paying inside with the attendant. Contact your bank immediately if you believe you&#8217;ve been a victim of skimming.</p>
<p><strong>ATM and Point of Sale skimming</strong><br />
Devices for ATM skimming often cover the original card reader, while a few skimming gadgets are located near exposed cables, in the terminal, or in the card reader itself. Pinhole cameras, whose placement varies greatly, capture users entering their PINs. Keypad overlays, which record user keystrokes, occasionally take the place of pinhole cameras.</p>
<p>Skimming equipment stores information for eventual wireless transfer or download.</p>
<p><strong>Tips to avoid falling prey to such crimes</strong><br />
Before using your cards, check POS terminals, ATMs, and other card readers. Look for anything that is off-centre, bent, broken, or scraped, and avoid using a card reader if you find anything strange. Before inputting your PIN, tug on the keypad&#8217;s edges, and cover the keypad while entering your PIN to prevent cameras from recording your entry. Use ATMs that are indoors, well-lit, and in safe locations, and watch out for skimming devices at ATMs in tourist destinations. Use chip-enabled cards: devices that steal chip data are less common than those that steal magnetic stripe data. Be cautious about using a debit card with linked accounts; use a credit card instead. Immediately contact your bank if the ATM doesn&#8217;t return your card after you cancel a transaction.</p>
<p><strong>Impersonation scams</strong><br />
In this scenario, scammers pose as bank employees or other authority figures to gain your trust and access to your personal information. For example, they may call or email you, claiming to be from your bank, and ask for your personal information or login credentials.</p>
<p>Credit card fraud was one of the most widespread types of fraud in the United States in 2021, according to complaints received by the Federal Trade Commission (FTC). However, that statistic only provides a partial picture of the issue.</p>
<p>The Nilson Report, which tracks the payments sector, predicted that over the next ten years, losses in the United States from card fraud would reach $165.1 billion, affecting every age group. According to Insider Intelligence, only one sort of credit card fraud, card-not-present fraud involving online, over-the-phone, and mail-order transactions, will be responsible for an average estimated $5.72 billion in losses in the world’s largest economy in 2022 and beyond.</p>
<p>Credit card fraud occurs when someone uses a credit card to make an illicit purchase, such as buying goods on Amazon with a stolen card. Other types of credit card fraud include identity theft, using stolen cards, and card-not-present fraud. While credit card fraud is a significant issue, there are precautions you can take to avoid becoming one of the statistics.</p>
<p><strong>Theft of identity</strong><br />
Identity theft occurs when fraud or another crime is committed using your personal information, such as your credit card or Social Security number. The Federal Trade Commission received around 1.4 million reports of identity theft in 2021.</p>
<p><strong>Conclusion</strong><br />
Technology and banking scams are becoming increasingly sophisticated, and it&#8217;s essential to be aware of the dangers and take steps to protect yourself. Always remember to be vigilant and never disclose your personal information or login credentials unless you&#8217;re confident you&#8217;re dealing with a legitimate source. Stay safe out there!</p>
<p>The post <a href="https://internationalfinance.com/magazine/banking-and-finance-magazine/staying-digitally-safe-from-banking-scams/">Staying digitally safe from banking scams</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/banking-and-finance-magazine/staying-digitally-safe-from-banking-scams/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>After &#8216;Fake Companies&#8217; row, LinkedIn now faces threat from AI-generated phishing campaigns</title>
		<link>https://internationalfinance.com/technology/after-fake-companies-linkedin-threat-ai-phishing-campaigns/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=after-fake-companies-linkedin-threat-ai-phishing-campaigns</link>
					<comments>https://internationalfinance.com/technology/after-fake-companies-linkedin-threat-ai-phishing-campaigns/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Fri, 24 Feb 2023 03:50:22 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[cyberattacks]]></category>
		<category><![CDATA[cybercriminals]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[LinkedIn]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[phishing]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[Whitepaper]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=46183</guid>

					<description><![CDATA[<p>Even invite-only LinkedIn groups were attacked by these scammers</p>
<p>The post <a href="https://internationalfinance.com/technology/after-fake-companies-linkedin-threat-ai-phishing-campaigns/">After &#8216;Fake Companies&#8217; row, LinkedIn now faces threat from AI-generated phishing campaigns</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Amid the Microsoft-backed OpenAI ChatGPT making waves, researchers have now come across an artificial intelligence-powered malicious ad campaign, targeting the LinkedIn profiles of businesses.</p>
<p>Cybersecurity researchers from SafeGuard Cyber recently came across a LinkedIn advertisement profile promoting a whitepaper that would help sales professionals optimize their operations and close more business deals, reports TechRadar.</p>
<p>The SafeGuard Cyber researchers described the ad&#8217;s creative as “bizarro”. It features a colour pattern in the lower right corner that is typically seen on images produced by OpenAI&#8217;s generative AI model Dall-E.</p>
<p>Dall-E works with text-based prompts: once the user describes the required image, the generative model produces it accordingly.</p>
<p>As per the researchers, the ad copy invited readers to sign up, exchanging their personal data for the whitepaper.</p>
<p>The ad creative was set up by an account named “Sales Intelligence”, which the SafeGuard Cyber researchers found suspicious.</p>
<p>Upon further investigation, the company page was found to be largely blank, hosting only a link that routed visitors to an Arizona jewellery store. The researchers speculate that the link was added merely to fill the mandatory fields required to set up the page. The whitepaper itself was found to be non-existent.</p>
<p>People signing up for the product may end up sharing their personal details hosted on LinkedIn, such as email and contact numbers, with the threat actors. These details can later be used in different phishing and social engineering attacks.</p>
<p>“Encountering this fake LinkedIn ad was a significant reminder of new social engineering dangers now appearing when coupled with Generative AI,” the researchers said.</p>
<p>While the researchers focused on the image, they also believe the ad copy was most likely AI-generated.</p>
<p>This news comes four months after the ‘Fake Companies’ phenomenon on the job hunting site.</p>
<p>A KrebsOnSecurity blog in October 2022 said that cyber attackers were using artificial intelligence to create bogus profiles, stealing job descriptions from legitimate accounts.</p>
<p>The tech website examined a series of such profiles back then, all of them claiming to hold Chief Information Security Officer (CISO) roles at various Fortune 500 companies, including Biogen, Chevron, ExxonMobil and Hewlett Packard.</p>
<p>KrebsOnSecurity verified the claims with LinkedIn users and readers while forming its assessment.</p>
<p>Even invite-only LinkedIn groups were attacked by these scammers.</p>
<p>As per cybersecurity firm Mandiant, cybercriminals were using these bogus accounts to get into cryptocurrency firms before draining the target companies&#8217; funds. Experts have even described this as a form of romance scam, in which victims are lured onto fake crypto platforms.</p>
<p>There has been recent evidence of threat actors using fake LinkedIn accounts to spread malware and viruses, with a focus on the crypto sector.</p>
<p>In response to the KrebsOnSecurity report, LinkedIn said in 2022 that it was introducing domain verification to combat such abuse.</p>
<p>The post <a href="https://internationalfinance.com/technology/after-fake-companies-linkedin-threat-ai-phishing-campaigns/">After &#8216;Fake Companies&#8217; row, LinkedIn now faces threat from AI-generated phishing campaigns</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/technology/after-fake-companies-linkedin-threat-ai-phishing-campaigns/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
