<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Deepfake Archives - International Finance</title>
	<atom:link href="https://internationalfinance.com/tag/deepfake/feed/" rel="self" type="application/rss+xml" />
	<link>https://internationalfinance.com/tag/deepfake/</link>
	<description>International Finance - Financial News, Magazine and Awards</description>
	<lastBuildDate>Tue, 23 Dec 2025 13:56:51 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://internationalfinance.com/wp-content/uploads/2020/08/favicon-1-75x75.png</url>
	<title>Deepfake Archives - International Finance</title>
	<link>https://internationalfinance.com/tag/deepfake/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>How to spot a deepfake video: Here is the tutorial</title>
		<link>https://internationalfinance.com/technology/how-spot-deepfake-video-here-tutorial/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-spot-deepfake-video-here-tutorial</link>
					<comments>https://internationalfinance.com/technology/how-spot-deepfake-video-here-tutorial/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Tue, 23 Dec 2025 13:56:51 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI-generated Videos]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Deepfake]]></category>
		<category><![CDATA[Fake Videos]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[TikTok]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54249</guid>

					<description><![CDATA[<p>Deepfakes are already appearing on social media feeds and can be broadcast on television news segments</p>
<p>The post <a href="https://internationalfinance.com/technology/how-spot-deepfake-video-here-tutorial/">How to spot a deepfake video: Here is the tutorial</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As the 21st century global socio-economic order embraces <a href="https://globalbusinessoutlook.com/technology/new-threat-for-businesses-major-artificial-intelligence-agents-are-being-spoofed/"><strong>artificial intelligence</strong></a> (AI), a new type of video called a deepfake has emerged. It uses the same technology to mimic the look and sound of a genuine video or audio recording, even when the events depicted never happened. These convincing yet fake videos often show public figures making statements they never made, place someone’s face onto another person’s body, or replicate a person’s voice to deliver messages they never spoke. In short, it is a tech-powered weapon that threat actors will increasingly use to commit online crimes.</p>
<p>Also, deepfakes are being used to spread political lies, manipulate financial markets, and even give fake medical advice that could put people’s lives at risk. For example, AI-generated “doctors” have appeared on TikTok, dispensing dangerous health guidance, complete with fabricated backstories and digitally generated faces.</p>
<p><a href="https://globalbusinessoutlook.com/technology/deepfake-fraud-growing-concern-businesses/"><strong>Deepfakes</strong></a> are already appearing on social media feeds and can be broadcast on television news segments. Experts warn that they could even make their way into commercial advertising, blurring the line between marketing and manipulation. And as the technology keeps improving, these malicious acts will only get harder to spot and stop. However, no crime is perfect, and even the most sophisticated deepfakes usually contain subtle errors that, upon close observation, can reveal their true nature.</p>
<p><strong>Watch The Eyes, Not The Mouth</strong></p>
<p>Eyes are surprisingly hard to fake properly. In deepfake videos, people may blink too little, too much, or in odd patterns. Sometimes the eyes look lifeless, as if they’re staring through the camera instead of reacting to the moment. If something feels “off” but you cannot explain why, it is often the eyes.</p>
<p><strong>Check The Lighting And Shadows</strong></p>
<p>Lighting usually gives deepfakes away. Shadows might fall in the wrong direction or change slightly when the face moves. The skin might look evenly lit, even when the environment should not allow it. Real lighting is messy. Fake lighting is often too clean.</p>
<p><strong>Look Closely For Facial Glitches</strong></p>
<p>This part takes a second viewing. Watch the edges of the face, the jawline, and around the hair. You might notice blurring, warping, or tiny glitches when the head turns. Sometimes the skin looks too smooth, like a filter that forgot to turn off.</p>
<p><strong>Pay Attention To Lip Sync</strong></p>
<p>Deepfakes are better at syncing lips now, but they still slip. Words might land a fraction of a second late, or the mouth shapes do not fully match the sounds. If you mute the video and watch the mouth, the mismatch becomes more obvious.</p>
<p><strong>Notice How The Head And Body Move</strong></p>
<p>Faces get all the attention, but bodies are harder to fake. The head might move with unnatural stiffness, or the neck may not move quite right. Sometimes the face moves independently of the body, which your brain notices even if you don’t consciously think about it.</p>
<p><strong>Listen To The Voice Carefully</strong></p>
<p>Voices can sound flat, robotic, or emotionally wrong. Pauses may feel unnatural, or emphasis lands in strange places. Even when the voice sounds realistic, something about the rhythm can feel slightly off.</p>
<p><strong>Watch Emotional Reactions</strong></p>
<p>This is a big one. Real emotions are messy. Deepfakes often miss micro-expressions, those tiny reactions that happen before someone speaks. The smile comes too late. The anger feels rehearsed. The face doesn’t fully match the moment.</p>
<p>The post <a href="https://internationalfinance.com/technology/how-spot-deepfake-video-here-tutorial/">How to spot a deepfake video: Here is the tutorial</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/technology/how-spot-deepfake-video-here-tutorial/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deepfake fallout: Welcome to the age of paranoia</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=deepfake-fallout-welcome-to-the-age-of-paranoia</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Wed, 13 Aug 2025 07:47:15 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Deepfake]]></category>
		<category><![CDATA[email]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[phishing]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<category><![CDATA[Social Engineering]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=53207</guid>

					<description><![CDATA[<p>In Hong Kong, a financial worker was tricked into paying out $25 million when fraudsters used deepfake technology to impersonate the company’s CFO</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/">Deepfake fallout: Welcome to the age of paranoia</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="ai-optimize-54 ai-optimize-introduction"><span data-preserver-spaces="true">In 2025, reports emerged about cybercriminals using deepfake voice and video technology to impersonate senior US government officials and high-profile tech figures in sophisticated phishing campaigns designed to steal sensitive data.</span></p>
<p class="ai-optimize-55"><span data-preserver-spaces="true">According to the FBI, threat actors have been contacting current and former federal and state officials through fake voice and text messages claiming to be from trusted sources. These scammers then attempt to establish rapport before directing victims to malicious websites to extract passwords and other private information.</span></p>
<p class="ai-optimize-56"><span data-preserver-spaces="true">The FBI also cautions that once threat actors compromise one official’s account, they may use that access to impersonate the victim and target others within their network. Verifying identities, avoiding unsolicited links, and enabling multifactor authentication to protect sensitive accounts will be even more crucial.</span></p>
<p class="ai-optimize-57"><span data-preserver-spaces="true">The FBI and cybersecurity experts recommend examining media for visual inconsistencies, avoiding software downloads during unverified calls, and never sharing credentials or wallet access unless certain of the source’s legitimacy.</span></p>
<p class="ai-optimize-58"><strong><span data-preserver-spaces="true">An evolving threat</span></strong></p>
<p class="ai-optimize-59"><span data-preserver-spaces="true">Essentially, we are talking about scams where sophisticated AI is used to create highly convincing audio, images, text, or videos that look, sound, and act like real people. The easy availability of this technology practically gives fraudsters access to Hollywood-style special effects, enabling bad actors to commit deepfake fraud at scale. The World Bank reports that deepfake fraud has surged by 900% in recent years. Losses fuelled by generative AI are on track to reach $40 billion by 2027.</span></p>
<p class="ai-optimize-60"><span data-preserver-spaces="true">Deepfake fraud has become troubling because of its highly realistic nature, accessibility to fraudsters, and scalability. Generative artificial intelligence and deepfakes are making existing types of fraud, such as new account fraud, account takeover, phishing, impersonations, and social engineering, even more costly. While voice-cloning deepfakes have successfully targeted several global businesses, video-based deepfakes are empowering criminal groups like the Yahoo Boys with compelling romance scams.</span></p>
<p class="ai-optimize-61"><span data-preserver-spaces="true">Consider this: Generative AI rapidly creates images that appear &#8216;realistic&#8217; with almost zero imperfections, eliminating telltale signs of deepfakes such as strange-looking fingers, distorted faces, or stretched-out arms. To make matters worse, using cloud computing, criminals can launch multiple attacks simultaneously or create a large volume of synthetic content for a targeted campaign, such as spear-phishing fraud.</span></p>
<p class="ai-optimize-62"><span data-preserver-spaces="true">Generative AI and deepfakes are already being incorporated into several common frauds. This includes &#8220;New Account Opening Fraud,&#8221; where criminals use deepfake technology with synthetic videos, audio, or images that appear to be a legitimate person opening a new bank account. From there, they can bypass facial recognition or liveness detection measures. By mimicking an account holder’s appearance, voice, and mannerisms, fraudsters can convince a customer service representative to grant them access to someone else’s account.</span></p>
<p class="ai-optimize-63"><span data-preserver-spaces="true">Spelling and grammar mistakes were once obvious red flags of phishing scams. However, thanks to GenAI, criminals are less likely to make these errors. Fraudsters can now craft persuasive phishing messages that are grammatically correct, contextually relevant, and have perfect spelling.</span></p>
<p class="ai-optimize-64"><span data-preserver-spaces="true">Fraudsters can also convincingly imitate individuals in professional settings, such as meetings or legal proceedings, to commit fraud. In personal settings, they can pretend to be a loved one in need of financial or medical help, as in a romance or grandparent scam. Synthetic identities (fake identities created by combining real and fictitious information) now look convincingly like real people, and they are defrauding businesses and individuals alike.</span></p>
<p class="ai-optimize-65"><span data-preserver-spaces="true">In Hong Kong, a financial worker was tricked into paying out $25 million when fraudsters used deepfake technology to impersonate the company’s CFO. In Italy, scammers targeted a group of entrepreneurs earlier in 2025, copying Defence Minister Guido Crosetto’s voice and requesting money to help pay the ransom of journalists kidnapped overseas.</span></p>
<p class="ai-optimize-66"><span data-preserver-spaces="true">At least one victim paid €1 million to an overseas account. WPP CEO Mark Read said United Kingdom-based scammers unsuccessfully used a combination of a voice clone and YouTube footage to impersonate him in a meeting with ad company executives in 2024.</span></p>
<p class="ai-optimize-67"><span data-preserver-spaces="true">Video-based deepfake frauds make impersonation-based fraud, like romance scams, even more difficult to catch. In 2024, American consumers lost an estimated $1.14 billion to romance scams. With deepfake technology, scammers can create a large library of fake online suitors. Aided by advanced large language models (LLMs) like LoveGPT, romance scammers can target multiple victims at the same time.</span></p>
<p class="ai-optimize-68"><span data-preserver-spaces="true">Manipulating publicly available images to commit romance scams has proven effective. In 2024, a scammer used simpler technology to deceive a French woman into believing she was in a relationship with Brad Pitt. Organised romance scam groups like the Yahoo Boys are creating more personalised communication for their targets in real time, making romance scams even more convincing and likely to succeed.</span></p>
<p class="ai-optimize-69"><span data-preserver-spaces="true">Even tech boss Elon Musk couldn&#8217;t save himself from being deepfaked. In 2024, there were reports of AI-powered videos posing as genuine footage of the Tesla and X (formerly Twitter) boss going viral. The New York Times dubbed the deepfake Musk “the internet’s biggest scammer.”</span></p>
<p class="ai-optimize-70"><span data-preserver-spaces="true">Steve Beauchamp, an 82-year-old retiree, told the New York Times that he drained his retirement fund and invested $690,000 in such a scam over several weeks, convinced that a video he had seen of Musk was real. His money soon vanished without a trace.</span></p>
<p class="ai-optimize-71"><span data-preserver-spaces="true">“Now, whether it was AI making him say the things that he was saying, I really don’t know. But as far as the picture, if somebody had said, ‘Pick him out of a lineup,’ that’s him. Looked just like Elon Musk, sounded just like Elon Musk, and I thought it was him,” Beauchamp told the NYT.</span></p>
<p class="ai-optimize-72"><span data-preserver-spaces="true">Deepfake-powered videos can fuel other impersonation tactics like &#8220;CEO fraud&#8221; or grandparent scams. If the target believes they are interacting with the real person, they are more inclined to follow instructions to help their company or a family member.</span></p>
<p class="ai-optimize-73"><span data-preserver-spaces="true">While audio and visual manipulation have emerged as critical components behind the deepfakes&#8217; success, the rest depends on trust. Here, psychological manipulation from social engineering is working wonders for cybercriminals.</span></p>
<p class="ai-optimize-74"><span data-preserver-spaces="true">By scouring information like social media profiles, compromised data, or other sensitive information, fraudsters create specific scenarios that emotionally trigger their targets and quickly gain their attention and trust. </span><span data-preserver-spaces="true">The more detailed </span><span data-preserver-spaces="true">a story the scammer presents</span><span data-preserver-spaces="true">, the more believable it is.</span></p>
<p class="ai-optimize-75"><span data-preserver-spaces="true">Businesses and banks may see a rise in highly personalised “scams as a service” tactics. </span><span data-preserver-spaces="true">Criminals can purchase pre-configured deepfake materials for a specific target (a bank manager or executive)</span><span data-preserver-spaces="true">, in addition to accessing</span><span data-preserver-spaces="true"> information like email lists to gain intel on any financial organisation’s internal hierarchy.</span></p>
<p class="ai-optimize-76"><strong><span data-preserver-spaces="true">Money and trust are getting eroded</span></strong></p>
<p class="ai-optimize-77"><span data-preserver-spaces="true">In a 2024 Deloitte poll, 25.9% of executives revealed that their organisations had experienced one or more deepfake incidents targeting financial and accounting data in the 12 months prior, while 50% of all respondents said they expected a rise in attacks over the following 12 months.</span></p>
<p class="ai-optimize-78"><span data-preserver-spaces="true">The United States Financial Crimes Enforcement Network (FinCEN) issued an alert in 2024 to help financial institutions identify fraud schemes that use deepfake media created with GenAI tools.</span></p>
<p class="ai-optimize-79"><span data-preserver-spaces="true">The network observed </span><span data-preserver-spaces="true">an increase in</span><span data-preserver-spaces="true"> suspicious activity reports from financial institutions describing the suspected use of deepfake media in fraud schemes targeting their institutions and customers, beginning in 2023 and continuing into 2024.</span></p>
<p class="ai-optimize-80"><span data-preserver-spaces="true">Deloitte’s Centre for Financial Services predicts that GenAI could enable fraud losses to reach $40 billion in the United States by 2027. To make matters worse, digital trust is “crumbling” under an avalanche of synthetic media, misinformation, and deepfake fraud, according to a new report from Jumio.</span></p>
<p class="ai-optimize-81"><span data-preserver-spaces="true">The firm’s fourth annual &#8220;Jumio Online Identity Study&#8221; surveyed 8,001 adult consumers split equally among the United States, Mexico, the United Kingdom, and Singapore. They have much in common: namely, a growing fear that AI-powered fraud now poses a greater threat to personal security than traditional forms of identity theft, and a corresponding rise in scepticism about anything and everything online.</span></p>
<p class="ai-optimize-82"><span data-preserver-spaces="true">&#8220;Fraud-as-a-service (FaaS) ecosystems have erupted like a bad rash, enabling even amateur fraudsters to leverage synthetic identities, deepfake videos, and botnet-driven account takeovers. Consumers must navigate scam emails, manipulated social media content, and digitally altered identity documents. Seven out of ten global consumers (69%) indicated they are more sceptical of the content they see online due to AI-generated fraud than they were last year,&#8221; the report noted.</span></p>
<p class="ai-optimize-83"><span data-preserver-spaces="true">When asked who they trust most to protect their </span><span data-preserver-spaces="true">personal</span><span data-preserver-spaces="true"> data, 93% of respondents said they trust themselves over the government or Big Tech.</span></p>
<p class="ai-optimize-84"><span data-preserver-spaces="true">However, Jumio said, “Self-reliance does not mean consumers want to go it alone. </span><span data-preserver-spaces="true">In fact,</span><span data-preserver-spaces="true"> when asked who should be most responsible for stopping AI-powered fraud, 43% pointed to Big Tech, compared to just 18% who chose themselves.”</span></p>
<p class="ai-optimize-85"><span data-preserver-spaces="true">The research further showed that consumers are open to modernised fraud protection, even if it means additional steps. Most respondents globally said they would be willing to spend more time completing comprehensive identity verification processes, especially in sectors where the stakes are high, like banking or healthcare.</span></p>
<p class="ai-optimize-86"><span data-preserver-spaces="true">But it also recognises that technology alone is not the answer. Jumio CEO Robert Prigge said, “Building a trustworthy digital world depends on strong consumer education and transparency. </span><span data-preserver-spaces="true">With </span><span data-preserver-spaces="true">day-to-day</span><span data-preserver-spaces="true"> worries about generative algorithmic technologies on the rise, the trust gap </span><span data-preserver-spaces="true">also</span><span data-preserver-spaces="true"> continues to grow proportionally.</span><span data-preserver-spaces="true"> As such, businesses must also earn consumer trust in these protections.”</span></p>
<p class="ai-optimize-87"><strong><span data-preserver-spaces="true">The age of paranoia kicks in</span></strong></p>
<p class="ai-optimize-88"><span data-preserver-spaces="true">Nicole Yelland, who works in public relations for a Detroit-based nonprofit, now conducts a multi-step background check whenever she receives a meeting request from someone she doesn’t know. Yelland runs the person’s information through Spokeo, a personal data aggregator. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call, with their camera on.</span></p>
<p class="ai-optimize-89"><span data-preserver-spaces="true">If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. &#8220;Now, I do the whole verification rigmarole any time someone reaches out to me,&#8221; she told WIRED.</span></p>
<p class="ai-optimize-90"><span data-preserver-spaces="true">In a time when remote work and distributed teams have become commonplace, professional communication channels are no longer safe, thanks to GenAI-powered scams. The same AI tools that tech companies use to boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.</span></p>
<p class="ai-optimize-91"><span data-preserver-spaces="true">Big Tech journalist Lauren Goode said, &#8220;On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams have increased from $90 million to $500 million.&#8221;</span></p>
<p class="ai-optimize-92"><span data-preserver-spaces="true">Yelland says the scammers who approached her in January 2025 were impersonating a real company</span><span data-preserver-spaces="true">, one</span><span data-preserver-spaces="true"> with a legitimate product.</span><span data-preserver-spaces="true"> The “hiring manager” she corresponded with over email also seemed legit, even sharing a slide deck outlining the responsibilities of the role they were advertising.</span></p>
<p class="ai-optimize-93"><span data-preserver-spaces="true">However, during the first video interview, Yelland says, the scammers refused to turn their cameras on during a Microsoft Teams meeting </span><span data-preserver-spaces="true">and made</span><span data-preserver-spaces="true"> unusual requests for detailed personal information, including her driver’s license number. Realising she’d been duped, Yelland slammed her laptop shut.</span></p>
<p class="ai-optimize-94"><span data-preserver-spaces="true">These schemes have forced AI players, including GetReal Labs and Reality Defender, to work on technologies that detect AI-enabled deepfakes. OpenAI CEO Sam Altman also co-founded an identity-verification startup called &#8220;Tools for Humanity,&#8221; which makes eye-scanning devices that capture a person’s biometric data, create a unique identifier for their identity, and store that information on the blockchain. The whole idea behind it is proving “personhood,” or that someone is a real human.</span></p>
<p class="ai-optimize-95"><span data-preserver-spaces="true">&#8220;A section of corporate professionals is also turning to old-fashioned social engineering techniques to verify every fishy-seeming interaction they have. Welcome to the age of paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words </span><span data-preserver-spaces="true">with each other</span><span data-preserver-spaces="true">, so they </span><span data-preserver-spaces="true">have a way to</span><span data-preserver-spaces="true"> ensure they’re not being misled if an encounter feels off,&#8221; Goode stated.</span></p>
<p class="ai-optimize-96"><span data-preserver-spaces="true">Daniel Goldman, a blockchain software engineer and former startup founder, said, &#8220;What’s funny is, the lo-fi approach works.&#8221;</span></p>
<p class="ai-optimize-97"><span data-preserver-spaces="true">Goldman began changing his </span><span data-preserver-spaces="true">own</span><span data-preserver-spaces="true"> professional behaviour after he heard</span><span data-preserver-spaces="true"> a prominent figure in the crypto world had been convincingly deepfaked on a video call.</span></p>
<p class="ai-optimize-98"><span data-preserver-spaces="true">He ended up warning his close ones that even if they hear &#8220;his voice&#8221; or &#8220;see him&#8221; on a video call asking for money or an internet password, they should hang up and email him before doing anything.</span></p>
<p class="ai-optimize-99"><span data-preserver-spaces="true">Ken Schumacher, founder of the recruitment verification service Ropes, has worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their resume, such as their favourite coffee shops and places to hang out. Another verification tactic people use is what Schumacher calls the “phone camera trick.”</span></p>
<p class="ai-optimize-100"><span data-preserver-spaces="true">Here, if someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.</span></p>
<p class="ai-optimize-101"><span data-preserver-spaces="true">However, it’s safe to say this approach can also be off-putting: Honest job candidates may be hesitant to show off the inside of their homes or offices, or worry a hiring manager is trying to learn details about their personal lives.</span></p>
<p class="ai-optimize-102"><span data-preserver-spaces="true">“Everyone is on edge and wary of each other now,” Schumacher says, which perfectly sums up the shift in mood in the age of GenAI-powered scams.</span></p>
<p class="ai-optimize-103"><span data-preserver-spaces="true">As deepfakes grow more advanced and accessible, AI-driven scams are reshaping cybercrime. Traditional security is no longer enough; vigilance, identity checks, and robust cybersecurity frameworks are essential to counter this rising threat.</span></p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/">Deepfake fallout: Welcome to the age of paranoia</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/deepfake-fallout-welcome-to-the-age-of-paranoia/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>IF Insights: Is Telegram in trouble post Pavel Durov’s arrest?</title>
		<link>https://internationalfinance.com/technology/if-insights-telegram-trouble-post-pavel-durovs-arrest/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=if-insights-telegram-trouble-post-pavel-durovs-arrest</link>
					<comments>https://internationalfinance.com/technology/if-insights-telegram-trouble-post-pavel-durovs-arrest/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 05 Sep 2024 06:12:56 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Dark Web]]></category>
		<category><![CDATA[Deepfake]]></category>
		<category><![CDATA[France]]></category>
		<category><![CDATA[internet]]></category>
		<category><![CDATA[Law Enforcement]]></category>
		<category><![CDATA[Pavel Durov]]></category>
		<category><![CDATA[Russia]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Telegram]]></category>
		<category><![CDATA[WhatsApp]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=50780</guid>

					<description><![CDATA[<p>With the help of the Telegram software, users can have one-on-one chats, group chats, and broadcast messages to a large number of subscribers through channels</p>
<p>The post <a href="https://internationalfinance.com/technology/if-insights-telegram-trouble-post-pavel-durovs-arrest/">IF Insights: Is Telegram in trouble post Pavel Durov’s arrest?</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The CEO and creator of the messaging service Telegram, <a href="https://internationalfinance.com/business-leaders/business-leader-telegram-founder-pavel-durov-uaes-richest-expat/"><strong>Pavel Durov</strong></a>, was detained in Paris over the weekend on suspicion of using his platform for illegal activities such as the dissemination of pictures of child abuse and the sale of drugs.</p>
<p>Durov holds multiple citizenships spanning France, Russia, the Caribbean island nation of St. Kitts and Nevis, and the United Arab Emirates (UAE). He was born in Russia but spent much of his youth in Italy. He was detained at Paris-Le Bourget Airport in France after arriving from Azerbaijan and was freed following four days of interrogation. According to the Paris prosecutor&#8217;s office, he was ordered to report to a police station twice a week and to post bail of 5 million euros.</p>
<p>In a statement posted on its platform, Telegram maintained that it complies with European Union (EU) regulations and that its content moderation is &#8220;within industry standards and constantly improving.&#8221; The company added that Durov &#8220;has nothing to hide and travels across Europe frequently.&#8221;</p>
<p>Here are some specifics about the Telegram app, which is the reason behind Durov&#8217;s detention.</p>
<p><strong>What is Telegram?</strong></p>
<p>The Telegram app lets users have one-on-one chats, group chats, and broadcast messages to large numbers of subscribers through &#8220;channels.&#8221; In contrast to competitors like Meta&#8217;s <a href="https://internationalfinance.com/technology/whatsapp-communities-rolls-out-beta-users-all-you-need-know/"><strong>WhatsApp</strong></a>, Telegram supports group chats of up to 200,000 users, while WhatsApp caps groups at 1,024 members. Experts worry that false information can spread quickly in such large group discussions.</p>
<p>Contrary to popular belief, Telegram does not enable end-to-end encryption for user communications by default; users must activate the option, and it is not available for group conversations. This is in contrast to Facebook Messenger and rival Signal, where conversations are always end-to-end encrypted.</p>
<p>&#8220;Group conversations and channels—two popular Telegram features—are not end-to-end encrypted,&#8221; said John Scott-Railton, a senior researcher at Citizen Lab at the University of Toronto, explaining that their contents can be accessed through Telegram.</p>
<p>Similarly, user-to-user conversations are not end-to-end encrypted by default, which means Telegram can access them as well. The only end-to-end encrypted function on Telegram is the opt-in &#8220;secret chat&#8221; option, which prevents Telegram from viewing the chat data.</p>
<p>According to Telegram, there are over 950 million active users. It is a popular messaging app in France, and some officials from the presidential palace and the ministry overseeing Durov&#8217;s probe use it as well. However, French police have also discovered that drug dealers and Islamic extremists have utilised the programme.</p>
<p>In 2013, Durov and his brother Nikolai founded Telegram. Pavel Durov backs the app &#8220;financially and philosophically, whereas Nikolai&#8217;s input is technological,&#8221; according to Telegram.</p>
<p>Durov established the biggest social network in Russia, VKontakte, before Telegram. The business faced pressure as a result of the Russian government&#8217;s crackdown following the large-scale pro-democracy demonstrations that shook Moscow at the end of 2011 and 2012.</p>
<p>According to Durov, government representatives ordered VKontakte to remove the online communities run by Russian opposition activists. Later, the authorities demanded that the site hand over the personal data of users who participated in the 2013 Ukrainian uprising that resulted in the removal of a pro-Kremlin president.</p>
<p>However, in 2014, under pressure from Russian authorities, Durov sold his stake in VKontakte and left the country. Now headquartered in Dubai, Durov described the city as &#8220;the finest position for a neutral platform like ours to be in if we want to make sure we can preserve our users&#8217; privacy and freedom of speech&#8221; during an April 2024 interview with Tucker Carlson, host of a conservative talk programme.</p>
<p><strong>Why Did Durov Get Arrested?</strong></p>
<p>French officials detained Durov and brought preliminary charges against him for allegedly permitting illegal activities on Telegram, and barred him from leaving the country while they conduct further inquiries. Durov is accused of allowing drug trafficking and child abuse material to be distributed on his platform, and of Telegram&#8217;s refusal to provide information or documents to law enforcement when asked to do so.</p>
<p>According to the prosecutor&#8217;s office, the first preliminary charge against him was &#8220;complicity in maintaining an online platform to allow unlawful transactions by an organized gang,&#8221; a crime that carries a maximum sentence of 10 years in prison and a fine of 500,000 euros.</p>
<p>French law defines preliminary charges as a magistrate&#8217;s strong suspicion of a crime, with the option to continue the inquiry at a later date.</p>
<p><strong>South Korea Gets Tough</strong></p>
<p>South Korean police have now launched an investigation into Telegram over deepfake online sex crimes, reported the Yonhap news agency.</p>
<p>South Korean authorities have called on Telegram and other social media platforms to cooperate in fighting sexually explicit deepfake content. In August 2024, a broadcaster reported that university students were running an illegal Telegram chatroom to share deepfake pornographic material of female classmates, one of a slew of high-profile cases that have stoked public anger.</p>
<p>&#8220;In light of these (deepfake) crimes, the Seoul National Police Agency launched their probe last week&#8230; for abetting the crimes,&#8221; said Woo Jong-soo, head of the investigation bureau at the National Police Agency, according to a transcript of a press briefing.</p>
<p>Police received 88 reports of deepfake porn last week alone, Woo said, adding they have identified 24 suspects. As per the AFP, the authorities have pledged to &#8220;find ways to cooperate with various investigative bodies, including the French, to enhance&#8221; their investigation into the platform.</p>
<p>According to activists, South Korea is suffering from &#8220;an epidemic of digital sex crimes,&#8221; including those involving spycams and revenge porn, with inadequate legislation to punish offenders.</p>
<p>Perpetrators of deepfake crimes have reportedly used social media platforms such as Instagram to save/screen-capture photos of victims, which were then used to create fake pornographic material.</p>
<p><strong>Dark Web Allegations Against Telegram</strong></p>
<p>Telegram&#8217;s lack of content filtering has drawn criticism from Western governments on several occasions. Experts claim this exposes the messaging app to possible use in drug trafficking, money laundering, and the transmission of child exploitation content.</p>
<p>David Thiel, a researcher at Stanford University&#8217;s Internet Observatory who has studied the use of online platforms for child exploitation, said that, compared with other messaging apps, Telegram is &#8220;less secure (and) more lax in terms of policy and detection of unlawful information.&#8221;</p>
<p>Additionally, Thiel stated that WhatsApp, a messaging software, &#8220;submitted over 1.3 million CyberTipline reports in 2023 (while) Telegram submits none,&#8221; and that Telegram &#8220;appears basically unresponsive to law enforcement.&#8221;</p>
<p>Germany fined Telegram&#8217;s operators 5.125 million euros (USD 5 million at the time) for noncompliance with German legislation. According to the Federal Office of Justice, Telegram had neither designated a German entity to receive official correspondence nor established a legal mechanism for reporting illegal content.</p>
<p>Both are required under German rules governing large internet platforms.</p>
<p>Due to Telegram&#8217;s refusal to provide information on neo-Nazi behaviour linked to a police investigation into school shootings in November 2023, Brazil temporarily blocked the messaging app.</p>
<p>According to Joe Tidy, cyber correspondent at BBC World Service, criminals generally favour the dark web because of the anonymity it provides: internet traffic is bounced around the world, obscuring users&#8217; locations. Citing researchers at the cybersecurity company Intel471, he said: “Pre-Telegram this activity (cybercrime) was predominantly done in online markets hosted using hidden dark web services, but for lower-level, lesser-skilled cyber-criminals, Telegram has become one of the most popular online destinations.”</p>
<p>The hacker group Qilin, which recently held the United Kingdom&#8217;s NHS hospitals to ransom, notably chose to publish stolen blood test data on its Telegram channel before posting it on its dark web website. The deepfake service used to create fake explicit images of teenagers in Spain and South Korea also runs its entire operation, including payment, on Telegram.</p>
<p>In January 2024, state police in Latvia set up a dedicated unit specialising in monitoring chat apps for drug trafficking and related communication, and officials have named Telegram as a particular concern.</p>
<p>On &#8220;Child Abuse Materials,&#8221; Telegram says that its content moderation is “within industry standards”, but the BBC has found evidence to the contrary in &#8220;an area of criminality far less visible.&#8221;</p>
<p>The BBC learnt that while Telegram does respond to some takedown requests from police and charities, it does not participate in programmes aimed at proactively preventing the spread of images and videos of child abuse. Failing to do enough to police CSAM is among the allegations French prosecutors have brought against the platform.</p>
<p>“At the heart of this case is the lack of moderation and co-operation of the platform, in particular in the fight against crimes against children,” said Jean-Michel Bernigaud, the secretary general of French child protection agency Ofmin, on LinkedIn.</p>
<p>Moderation is not the only problem for Telegram. The platform&#8217;s handling of police requests to remove illegal content and hand over evidence has also drawn criticism.</p>
<p>Brian Fishman, a co-founder of Cinder, a software platform for trust and safety, posted, “Telegram is another level: it has been the key hub for Isis for a decade. It tolerates CSAM. It&#8217;s ignored reasonable law enforcement engagement for years. It’s not &#8216;light&#8217; content moderation; it’s a different approach entirely.”</p>
<p>&#8220;Some might argue that Telegram’s privacy features mean that the company does not have much data about this activity to report to police. This is the case with ultra-private apps like Signal and WhatsApp. Telegram offers users similar levels of privacy if they opt to create a &#8216;Secret Chat&#8217; which uses the same end-to-end encryption that those apps do. It means the activity inside a conversation is completely private and not even Telegram itself can view the contents. However, this function is not set as default on Telegram, and it seems that most of the activity on the app &#8211; including on those illicit channels I was added to &#8211; are not set as secret,&#8221; Joe Tidy noted.</p>
<p>Telegram could read all such content and pass it on to the police if it wanted to, but it states in its terms and conditions that it does not. In June 2024, Pavel Durov told journalist Tucker Carlson that he employs only “about 30 engineers” to run the platform. Tidy added that frustrated police officers have complained to him about Telegram’s cold approach to law enforcement on the fringes of press events.</p>
<p>French authorities also noted in their statements about Durov’s charges that police in France and Belgium had historically encountered an “almost total lack of response from Telegram to legal requests”.</p>
<p>The post <a href="https://internationalfinance.com/technology/if-insights-telegram-trouble-post-pavel-durovs-arrest/">IF Insights: Is Telegram in trouble post Pavel Durov’s arrest?</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/technology/if-insights-telegram-trouble-post-pavel-durovs-arrest/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
