In 2025, reports emerged about cybercriminals using deepfake voice and video technology to impersonate senior US government officials and high-profile tech figures in sophisticated phishing campaigns designed to steal sensitive data.
According to the FBI, threat actors have been contacting current and former federal and state officials through fake voice and text messages claiming to be from trusted sources. These scammers then attempt to establish rapport before directing victims to malicious websites to extract passwords and other private information.
The FBI also cautions that once threat actors compromise one official’s account, they may use that access to impersonate the victim and target others in their network. Verifying identities, avoiding unsolicited links, and enabling multifactor authentication on sensitive accounts are therefore more crucial than ever.
The FBI and cybersecurity experts recommend examining media for visual inconsistencies, avoiding software downloads during unverified calls, and never sharing credentials or wallet access unless certain of the source’s legitimacy.
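On the multifactor authentication point, the sketch below shows how a time-based one-time password (TOTP), the mechanism behind most authenticator-app prompts, is generated and checked. It is only an illustration under stated assumptions: the pyotp library, the account name, and the issuer are placeholders, not anything prescribed by the FBI guidance.

```python
import pyotp  # third-party library: pip install pyotp

# Hypothetical enrolment: the service generates a shared secret and shows it
# to the user, typically as a QR code for their authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="official@example.gov", issuer_name="Agency SSO"))

# At login, the user types the six-digit code from their app.
submitted_code = totp.now()  # stand-in for real user input in this sketch

# verify() checks the code against the current 30-second window;
# valid_window=1 also accepts adjacent windows to tolerate clock drift.
if totp.verify(submitted_code, valid_window=1):
    print("MFA check passed")
else:
    print("MFA check failed")
```

Because the code changes every 30 seconds and is checked separately from the password, a stolen password alone is not enough to log in, although real-time phishing of the code itself remains possible, which is why the guidance also stresses verifying identities.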
An evolving threat
Essentially, we are talking about scams where sophisticated AI is used to create highly convincing audio, images, text, or videos that look, sound, and act like real people. The easy availability of this technology practically gives fraudsters access to Hollywood-style special effects, enabling bad actors to commit deepfake fraud at scale. The World Bank reports that deepfake fraud has surged by 900% in recent years. Losses fuelled by generative AI are on track to reach $40 billion by 2027.
Deepfake fraud is especially troubling because of its realism, its accessibility to fraudsters, and its scalability. Generative artificial intelligence and deepfakes are making existing types of fraud, such as new account fraud, account takeover, phishing, impersonation, and social engineering, even more costly. While voice-cloning deepfakes have successfully targeted several global businesses, video-based deepfakes are empowering criminal groups like the Yahoo Boys to run compelling romance scams.
Consider this: Generative AI rapidly creates images that appear ‘realistic’ with almost zero imperfections, eliminating telltale signs of deepfakes such as strange-looking fingers, distorted faces, or stretched-out arms. To make matters worse, using cloud computing, criminals can launch multiple attacks simultaneously or create a large volume of synthetic content for a targeted campaign, such as spear-phishing fraud.
Generative AI and deepfakes are already being incorporated into several common frauds. In new account opening fraud, criminals use synthetic videos, audio, or images that appear to show a legitimate person opening a bank account, letting them slip past facial recognition or liveness detection checks. In account takeover, fraudsters mimic an existing account holder’s appearance, voice, and mannerisms to convince a customer service representative to hand over access to someone else’s account.
Spelling and grammar mistakes were once obvious red flags of phishing scams. Thanks to GenAI, however, criminals are far less likely to make these errors: fraudsters can now craft persuasive phishing messages that are grammatically correct, contextually relevant, and free of spelling mistakes.
Fraudsters can also convincingly imitate individuals in professional settings, such as meetings or legal proceedings, to commit fraud. In personal settings, they can pretend to be a loved one in need of financial or medical help, as in a romance or grandparent scam. Synthetic identities (fake identities created by combining real and fictitious information) are now convincing enough to pass for real people and are being used to defraud businesses and individuals alike.
In Hong Kong, a finance worker was tricked into paying out $25 million after fraudsters used deepfake technology to impersonate the company’s CFO on a video conference call. In Italy, scammers targeted a group of entrepreneurs in early 2025, cloning Defence Minister Guido Crosetto’s voice and requesting money to help pay the ransom of journalists supposedly kidnapped overseas.
At least one victim paid €1 million into an overseas account. Mark Read, CEO of the UK-based ad group WPP, said scammers unsuccessfully used a combination of a voice clone and YouTube footage of him to try to set up a meeting with the company’s executives in 2024.
Video-based deepfake frauds make impersonation-based fraud, like romance scams, even more difficult to catch. In 2024, American consumers lost an estimated $1.14 billion to romance scams. With deepfake technology, scammers can create a large library of fake online suitors, and aided by tools built on large language models (LLMs), such as LoveGPT, they can target multiple victims at the same time.
Manipulating publicly available images to commit romance scams has proven effective. In 2024, a scammer used simpler technology to deceive a French woman into believing she was in a relationship with Brad Pitt. Organised romance scam groups like the Yahoo Boys are creating more personalised communication for their targets in real time, making romance scams even more convincing and likely to succeed.
Even tech boss Elon Musk couldn’t save himself from being deepfaked. In 2024, AI-powered videos posing as genuine footage of the Tesla and X (formerly Twitter) boss went viral. The New York Times dubbed the deepfake Musk “the Internet’s biggest scammer.”
Steve Beauchamp, an 82-year-old retiree, told the New York Times that he drained his retirement fund and invested $690,000 in such a scam over several weeks, convinced that a video he had seen of Musk was real. His money soon vanished without a trace.
“Now, whether it was AI making him say the things that he was saying, I really don’t know. But as far as the picture, if somebody had said, Pick him out of a lineup, that’s him. Looked just like Elon Musk, sounded just like Elon Musk, and I thought it was him,” Beauchamp told the NYT.
Deepfake-powered videos can fuel other impersonation tactics like “CEO fraud” or grandparent scams. If the target believes they are interacting with the real person, they are more inclined to follow their instructions to help their company or a family member.
While audio and visual manipulation are critical to a deepfake’s success, the rest depends on trust. Here, the psychological manipulation of social engineering is working wonders for cybercriminals.
By scouring social media profiles, breached data, and other sensitive sources, fraudsters create scenarios that emotionally trigger their targets and quickly win their attention and trust. The more detailed the story a scammer presents, the more believable it is.
Businesses and banks may see a rise in highly personalised “scams as a service” tactics. Criminals can purchase pre-configured deepfake materials for a specific target, such as a bank manager or executive, alongside email lists and other intelligence on a financial organisation’s internal hierarchy.
Money and trust are eroding
In a 2024 Deloitte poll, 25.9% of executives revealed that their organisations had experienced one or more deepfake incidents targeting financial and accounting data in the 12 months prior, while 50% of all respondents said they expected a rise in attacks over the following 12 months.
The United States Financial Crimes Enforcement Network (FinCEN) issued an alert in 2024 to help financial institutions identify fraud schemes that use deepfake media created with GenAI tools.
The network observed an increase in suspicious activity reports from financial institutions describing the suspected use of deepfake media in fraud schemes targeting their institutions and customers, beginning in 2023 and continuing into 2024.
Deloitte’s Centre for Financial Services predicts that GenAI could enable fraud losses to reach $40 billion in the United States by 2027. To make matters worse, digital trust is “crumbling” under an avalanche of synthetic media, misinformation, and deepfake fraud, according to a new report from Jumio.
The firm’s fourth annual “Jumio Online Identity Study” surveyed 8,001 adult consumers split equally among the United States, Mexico, the United Kingdom, and Singapore. They have much in common: a growing fear that AI-powered fraud now poses a greater threat to personal security than traditional forms of identity theft, and a corresponding rise in scepticism about anything and everything online.
“Fraud-as-a-service (FaaS) ecosystems have erupted like a bad rash, enabling even amateur fraudsters to leverage synthetic identities, deepfake videos, and botnet-driven account takeovers. Consumers must navigate scam emails, manipulated social media content, and digitally altered identity documents. Seven out of ten global consumers (69%) indicated they are more skeptical of the content they see online due to AI-generated fraud than they were last year,” the report noted.
When asked who they trust most to protect their personal data, 93% of respondents said they trust themselves over the government or Big Tech.
However, Jumio said, “Self-reliance does not mean consumers want to go it alone. In fact, when asked who should be most responsible for stopping AI-powered fraud, 43% pointed to Big Tech, compared to just 18% who chose themselves.”
The research further showed that consumers are open to modernised fraud protection, even if it means additional steps. Most respondents globally said they would be willing to spend more time completing comprehensive identity verification processes, especially in high-stakes sectors such as banking and healthcare.
But the study also recognises that technology alone is not the answer. Jumio CEO Robert Prigge said, “Building a trustworthy digital world depends on strong consumer education and transparency. With day-to-day worries about generative algorithmic technologies on the rise, the trust gap also continues to grow proportionally. As such, businesses must also earn consumer trust in these protections.”
The age of paranoia kicks in
Nicole Yelland, who works in public relations for a Detroit-based nonprofit, now conducts a multi-step background check whenever she receives a meeting request from someone she doesn’t know. Yelland runs the person’s information through Spokeo, a personal data aggregator. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call, with their camera on.
If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigmarole any time someone reaches out to me,” she told WIRED.
At a time when remote work and distributed teams have become commonplace, professional communication channels are no longer safe, thanks to GenAI-powered scams. The same AI tools that tech companies use to boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.
WIRED journalist Lauren Goode wrote, “On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams have increased from $90 million to $500 million.”
Yelland says the scammers who approached her in January 2025 were impersonating a real company, one with a legitimate product. The “hiring manager” she corresponded with over email also seemed legit, even sharing a slide deck outlining the responsibilities of the role they were advertising.
However, during the first interview, a Microsoft Teams meeting, Yelland says the scammers refused to turn their cameras on and made unusual requests for detailed personal information, including her driver’s license number. Realising she’d been duped, Yelland slammed her laptop shut.
These schemes have pushed companies such as GetReal Labs and Reality Defender to build technologies that detect AI-generated deepfakes. OpenAI CEO Sam Altman also runs an identity-verification startup called “Tools for Humanity,” which makes eye-scanning devices that capture a person’s biometric data, create a unique identifier for their identity, and store that information on the blockchain. The whole idea behind it is proving “personhood,” or that someone is a real human.
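To make the “personhood” idea concrete, here is a deliberately simplified sketch of a uniqueness registry: a biometric template is hashed into an identifier, and enrolment is refused if that identifier already exists. The function names and templates are invented for illustration, and the real Tools for Humanity system is far more sophisticated, relying on iris codes and privacy-preserving cryptography rather than plain hashes.

```python
import hashlib

# Simplified illustration of a "one person, one identifier" registry.
# Real biometric templates are noisy, so production systems use
# error-tolerant encodings and privacy-preserving matching, not exact hashes.

registry: set[str] = set()  # identifiers that have already been enrolled


def derive_identifier(biometric_template: bytes) -> str:
    """Hash the template so the raw biometric never has to be stored."""
    return hashlib.sha256(biometric_template).hexdigest()


def enrol(biometric_template: bytes) -> bool:
    """Enrol a person; refuse if the derived identifier already exists."""
    identifier = derive_identifier(biometric_template)
    if identifier in registry:
        return False  # duplicate: this template has already been enrolled
    registry.add(identifier)
    return True


# Example usage with made-up templates.
print(enrol(b"template-of-person-A"))  # True: first enrolment
print(enrol(b"template-of-person-A"))  # False: duplicate rejected
print(enrol(b"template-of-person-B"))  # True: a different person
```

Whether the identifiers end up on a blockchain, as Tools for Humanity does, or in a conventional database, the property that matters for proving personhood is that one human cannot mint two identities while the raw biometric stays out of the record.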
“A section of corporate professionals is also turning to old-fashioned social engineering techniques to verify every fishy-seeming interaction they have. Welcome to the age of paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off,” Goode stated.
Daniel Goldman, a blockchain software engineer and former startup founder, said, “What’s funny is, the lo-fi approach works.”
Goldman began changing his own professional behaviour after he heard a prominent figure in the crypto world had been convincingly deepfaked on a video call.
He ended up warning those close to him that even if they hear “his voice” or “see him” on a video call asking for money or a password, they should hang up and email him first before doing anything.
Ken Schumacher, founder of the recruitment verification service Ropes, has worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their resume, such as their favourite coffee shops and places to hang out. Another verification tactic is what Schumacher calls the “phone camera trick.”
Here, if someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera and point it at their laptop. The idea is to check whether the individual is running deepfake software on the computer to obscure their true identity or surroundings.
However, it’s safe to say this approach can also be off-putting: Honest job candidates may be hesitant to show off the inside of their homes or offices, or worry a hiring manager is trying to learn details about their personal lives.
“Everyone is on edge and wary of each other now,” Schumacher says, a remark that neatly sums up the shift in mood in the age of GenAI-powered scams.
As deepfakes grow more advanced and accessible, AI-driven scams are reshaping cybercrime. Traditional security is no longer enough; vigilance, identity checks, and robust cybersecurity frameworks are the need of the hour to counter this rising threat.
