<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Misinformation Archives - International Finance</title>
	<atom:link href="https://internationalfinance.com/tag/misinformation/feed/" rel="self" type="application/rss+xml" />
	<link>https://internationalfinance.com/tag/misinformation/</link>
	<description>International Finance - Financial News, Magazine and Awards</description>
	<lastBuildDate>Fri, 16 Jan 2026 13:15:30 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://internationalfinance.com/wp-content/uploads/2020/08/favicon-1-75x75.png</url>
	<title>Misinformation Archives - International Finance</title>
	<link>https://internationalfinance.com/tag/misinformation/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Misinformation: The rising business hazard</title>
		<link>https://internationalfinance.com/magazine/industry-magazine/misinformation-the-rising-business-hazard/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=misinformation-the-rising-business-hazard</link>
					<comments>https://internationalfinance.com/magazine/industry-magazine/misinformation-the-rising-business-hazard/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 15:03:25 +0000</pubDate>
				<category><![CDATA[Industry]]></category>
		<category><![CDATA[Magazine]]></category>
		<category><![CDATA[businesses]]></category>
		<category><![CDATA[Disinformation]]></category>
		<category><![CDATA[internet]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Stakeholders]]></category>
		<category><![CDATA[Twitter]]></category>
		<category><![CDATA[Websites]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54464</guid>

					<description><![CDATA[<p>For companies, it’s no longer a question of if they will face a misinformation attack, but when</p>
<p>The post <a href="https://internationalfinance.com/magazine/industry-magazine/misinformation-the-rising-business-hazard/">Misinformation: The rising business hazard</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Misinformation is no longer a fringe concern; it has become a fast-moving, reputation-wrecking force. As false narratives go viral, organisations must act swiftly to detect, counter, and contain the damage.</p>
<p>Not long ago, companies barely considered “misinformation campaigns” a serious threat. The odds of a viral falsehood causing lasting damage seemed near zero. That complacency is now gone. Today, a single lie gaining traction online can indeed send a company’s stock plummeting overnight.</p>
<p>All it takes is a critical mass of people believing a false claim. Say that a product is unsafe, made unethically, shoddy in quality, or linked to an extremist cause, and a customer boycott can erupt, wreaking havoc on the brand.</p>
<p>The World Economic Forum&#8217;s latest Global Risks Report emphasises the seriousness of this threat. It flags government-led misinformation and disinformation as a top short-term risk that can sow instability and erode trust in authority. Just as worrying, the report warns, is the potential impact on business.</p>
<p>Entire industries could see growth and sales stifled by waves of misleading narratives. This is especially true for sectors like biotechnology, where self-styled “biohackers” and other unqualified influencers tout unproven health remedies while disparaging effective, regulated treatments.</p>
<p>There’s also a geopolitical dimension. Some governments are now aggressively spreading falsehoods about products from rival countries. By poisoning public perception of a competitor’s goods, such state-sponsored lies can spark consumer boycotts. It’s a dangerous escalation amid today’s trade wars. The emergence of artificial intelligence could exacerbate the situation.</p>
<p>Many AI-driven social media algorithms are programmed to maximise engagement by elevating trending posts and unintentionally turbocharging sensational falsehoods over accurate news. In other words, the very platforms companies rely on for marketing can become the channels that amplify lies about them.</p>
<p>Companies also have limited legal recourse when misinformation strikes. There is often no simple way to stop those who sow lies online, and court remedies are notoriously difficult. In the United States, for example, internet platforms enjoy broad immunity from liability for user-posted content under Section 230 of the Communications Decency Act.</p>
<p>That law also shields websites that make good-faith efforts to moderate harmful content. Meanwhile, suing the originator of a damaging falsehood for defamation is usually a long shot and prohibitively expensive. It’s a gamble few organisations can afford.</p>
<p><strong>When falsehoods become weapons</strong></p>
<p>Not all misinformation is accidental or spread by misinformed individuals. In some cases, it’s a deliberate act of sabotage against a company.</p>
<p>During an interaction with World Finance, Ant Moore, a senior managing director in strategic communications at consultancy FTI Consulting, said, &#8220;At its worst, deliberate deception has the potential to destabilise or create severe financial and reputational damage.&#8221;</p>
<p>Moore explains that while everyday misinformation might start with someone innocently sharing a doctored photo or a counterfeit audio clip, thinking it’s real, true disinformation involves conscious intent.</p>
<p>It’s the difference between a rumour gone wrong and a coordinated lie launched specifically to hurt a target. In all cases, Moore notes, society’s ability to discern fake content hasn’t caught up to the sophistication of today’s forgeries.</p>
<p>There are many ways in which malicious misinformation can threaten a company’s well-being. For example, consumer boycotts and lost sales are extremely detrimental. False claims about a company’s products or practices can spark outrage and mass boycotts, causing an immediate hit to revenue.</p>
<p>There is also the erosion of brand trust to worry about. Once a damaging narrative takes hold, public perception can sour quickly, and customers may lose faith in the brand even if the story is later debunked, leading to long-term reputational harm.</p>
<p>Investors can panic too: shareholders may dump the stock if they believe the negative buzz, driving the share price down and alarming the market. Workforce morale is also at risk, as employees bombarded with false stories painting their employer as unethical may disengage or even quit, hurting the company’s internal culture and productivity.</p>
<p>Finally, baseless but high-profile allegations can trigger investigations or demands for answers from regulators or politicians, forcing the company to spend time and resources addressing a non-issue.</p>
<p>Real-world incidents illustrate how quickly a lie can erupt into a corporate crisis. In 2016, athletic brand New Balance faced a social media firestorm over false claims that it was aligned with far-right politics. In 2022, pharmaceutical giant Eli Lilly watched its stock price tumble by over 4% in a single day after a fake Twitter account impersonating the company announced that insulin would be given away for free, a claim that spread rapidly given insulin’s high cost to patients at the time.</p>
<p>And in 2023, Bud Light, America’s top-selling beer, saw sales plunge roughly 25% after a social media frenzy turned a promotional tie-in with a transgender influencer into a full-blown conservative boycott. The beer’s parent company blamed misinformation online for stoking the backlash. These cases highlight how falsehoods can lead to significant financial harm for businesses, whether spread intentionally or unintentionally.</p>
<p><strong>Exploitable info landscape</strong></p>
<p>According to communications experts, the only surprise is that more companies haven’t been blindsided sooner. Businesses today operate in an information environment that Chris Clarke, co-founder of agency Fire on the Hill, describes as “increasingly complex and globally connected.”</p>
<p>New forms of digital media emerge constantly, and information now moves across the world in an instant. Controlling its flow is next to impossible.</p>
<p>“In the current environment, which is chaotic, fragmented and lacking in trust, the ground is fertile for misinformation to go viral,” Clarke said.</p>
<p>Bad actors are quick to exploit this chaos. Foreign adversaries, ideological agitators, or even unscrupulous competitors might weaponise false stories to hurt a business. Companies must assume they will be targeted eventually and plan accordingly, making the fight against misinformation a top corporate priority rather than an afterthought.</p>
<p><strong>Early detection and response</strong></p>
<p>When false stories can be fabricated with a few clicks and broadcast worldwide within minutes, speed is of the essence. Companies must learn to spot and counter malicious narratives in real time before they spiral out of control. The challenge, however, is knowing where to look. Rebecca Jones, associate director at business intelligence firm Sibylline, points out that many communications and PR teams still focus on tracking the major social media platforms like X (formerly Twitter), Instagram, or TikTok for mentions of their brand.</p>
<p>“However, that is not where these disinformation campaigns begin, and arguably, by the time disinformation hits these sites, the issue has already gone viral and you are in crisis,” Jones explains.</p>
<p>In other words, by the time a lie about your company is trending on Twitter or being shared widely on Facebook, it’s probably too late to contain it.</p>
<p>According to Jones, harmful rumours more often germinate in the internet’s shadows on alternative social sites and fringe forums where sensational claims find a receptive audience. A conspiracy theory or fabricated story might simmer in those corners, quietly gathering momentum over time, before jumping to mainstream platforms and exploding into public view. For companies, keeping an eye on these lesser-known channels can be a game-changer.</p>
<p>If you can catch wind of a false narrative early, you might not be able to stop it entirely, but you can at least prepare.</p>
<p>“Even if it can’t be stopped, hopefully, such an early warning mechanism enables teams to have a plan of action in place for when it does hit the mainstream. As your executives are prepped, the press team is ready to respond, and perhaps you have even taken steps to pre-bunk the story,” Jones noted.</p>
<p>In fact, some businesses are now practising “pre-bunking”, which is pre-emptively debunking a looming false claim by releasing correct information or context before the lie goes viral. Another crucial defensive strategy is to proactively control the narrative about your own company.</p>
<p>&#8220;Facts are more impressive than fiction,&#8221; says Chris Walker, managing director of consultancy Be The Best Communications.</p>
<p>He advises organisations to compile clear evidence that disproves the false claim and to showcase the company’s genuine commitment to doing the right thing.</p>
<p>By quickly sharing factual proof, a company can undermine a rumour’s credibility and reassure the public. Walker also suggests directly challenging the source of the fake news and demanding that they show proof for their sensational claim. Often those spreading a lie can’t back it up, and if pressed to “put up,” they’ll likely have to “shut up.” Building trust through direct communication channels is also increasingly important.</p>
<p>Alice Regester, co-founder and CEO at communications agency 33Seconds, emphasises that companies should use their owned media, such as official websites, blogs, and verified social media accounts, to set the record straight quickly.</p>
<p>By consistently putting out accurate information on these channels, a company builds a reputation as a trusted source. Then, when a crisis hits, consumers know they can check the official company outlets for the truth instead of relying on hearsay. In short, the faster and more credibly a company can present its side of the story, the better its chance to blunt the impact of a falsehood.</p>
<p><strong>Collaborate and amplify</strong></p>
<p>Defending against misinformation is not a battle to fight alone. Companies can benefit from cultivating third-party champions, loyal customers, industry experts, and consumer advocates who will publicly counter false claims.</p>
<p>When a false narrative emerges, these outside voices help amplify the truth. Partnering with independent fact-checkers or giving credible media outlets evidence to debunk rumours can further extend the reach of a company’s rebuttal.</p>
<p>Another effective strategy is to build an influencer and fan community that will rally to the company’s defence.</p>
<p>Adam Blacker, PR director at HostingAdvice.com, said, &#8220;It is really hard to do everything yourself. You need to build a strong community of fans who love and support your brand. They, in turn, become brand ambassadors.&#8221;</p>
<p>These brand advocates can often counteract falsehoods faster and more credibly than any official corporate statement. Their genuine enthusiasm for the brand helps sway public sentiment in the company’s favour.</p>
<p>In tandem with human allies, companies are also turning to technology for an early warning. Social listening software that continuously scans social media and online forums for mentions of a company or relevant keywords is becoming indispensable. By analysing conversations in real time, these tools alert teams to unusual spikes or trending topics, giving them a chance to verify alarming claims before they hit the mainstream.</p>
<p>Catching a lie at the rumour stage (or at least early in its spread) means having a chance to intervene with correct information or prepare a measured response, rather than scrambling after the falsehood has already exploded.</p>
<p>Even with all these measures, experts say organisations should shift from a reactive stance to a proactive defence posture. Andy Grayland, Chief Information Security Officer at threat intelligence firm Silobreaker, argues that cyber threat intelligence (CTI) solutions can serve as a crucial radar system for spotting disinformation campaigns.</p>
<p>These advanced tools monitor a broad range of open sources from news sites and social networks to niche blogs, forums, and even parts of the deep web, looking for early indicators of threats to a company’s brand or interests. The moment something suspicious involving the company starts bubbling up, CTI systems can raise an alert.</p>
<p>Grayland notes that AI-powered intelligence platforms are increasingly essential for cutting through the noise of the internet and pinpointing real risks. They can also highlight patterns that suggest a coordinated effort to spread falsehoods. For instance, if an anti-vaccine group that typically mentions a particular pharmaceutical brand around 50 times a day suddenly ramps up to 500 mentions, a CTI platform would immediately flag the surge as suspicious.</p>
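The surge-flagging rule described above, a baseline of roughly 50 daily mentions suddenly jumping to 500, amounts to a simple baseline-deviation check. A minimal sketch in Python (the function name, z-score rule, and threshold are illustrative assumptions, not any CTI vendor's actual method):

```python
from statistics import mean, stdev

def flag_mention_spike(history, today, z_threshold=3.0):
    """Return True if today's mention count sits far above the
    historical baseline, measured as a z-score against past days."""
    baseline = mean(history)
    spread = stdev(history) or 1.0  # guard against a perfectly flat history
    return (today - baseline) / spread > z_threshold

# Baseline of roughly 50 mentions a day, as in the example above.
daily_mentions = [48, 52, 47, 55, 50, 49, 51]
print(flag_mention_spike(daily_mentions, 500))  # True: a 10x surge is flagged
print(flag_mention_spike(daily_mentions, 55))   # False: ordinary variation
```

Real platforms layer far more on top (account-coordination signals, content clustering, source reputation), but the core idea is the same: alert on deviation from a learned baseline rather than on absolute volume.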
<p>Armed with that knowledge, the company can quickly decide how to respond, whether by engaging with facts, informing authorities, or bracing for impact.</p>
<p>Early detection translates into real business value. Companies that gain real-time visibility into brewing falsehoods have a chance to head off financial losses, prevent full-blown reputational crises, and stay ahead of any regulatory or shareholder fallout. In an age where lies can go viral in an instant, having this kind of rapid radar and response capability safeguards not just a company’s reputation but its bottom line as well.</p>
<p>Misinformation and its more deliberate counterpart, disinformation, are not new. Rumours and hoaxes have troubled businesses for ages. However, in the digital age, social media and AI have accelerated the speed and reach of this threat. A lie that once spread slowly via word of mouth can now hit millions within hours, making viral falsehoods a far more potent danger to companies than ever before.</p>
<p>For companies, it’s no longer a question of if they will face a misinformation attack, but when. In this high-stakes environment, preparation is everything. By investing in early warning systems, building trust with stakeholders, and crafting rapid-response plans, businesses put themselves in a far stronger position to weather a misinformation storm.</p>
<p>When a false narrative hits, a prepared organisation can respond swiftly with facts, rally supportive voices, and contain the damage. Combating viral falsehoods has essentially become part of the cost of doing business, and those that respond decisively are the ones most likely to protect their reputation and bottom line.</p>
<p>Misinformation has evolved from an inconvenient distraction into a systemic corporate threat. Companies that once treated false narratives as isolated crises must now recognise them as recurring hazards that can erode trust, market value, and even long-term viability.</p>
<p>What makes the challenge more dangerous today is speed, as falsehoods can achieve global reach in minutes, amplified by algorithms, bots, and coordinated campaigns. In this environment, silence or delayed responses are no longer neutral options. They are liabilities.</p>
<p>The lesson is clear: proactive defence is the only real safeguard. Monitoring fringe channels, detecting narratives early, and maintaining direct lines of communication with stakeholders are now core business functions, not optional extras.</p>
<p>Pre-emptive storytelling, where companies anticipate disinformation and “inoculate” audiences with facts, has to complement traditional crisis management. Partnerships with fact-checkers, trusted influencers, and even competitors in vulnerable industries can create resilience against viral falsehoods.</p>
<p>Ultimately, misinformation is not just a reputational issue but a strategic one. Companies that integrate misinformation defence into their governance and risk frameworks will be better placed to protect their brands, investors, and customers. Those that do not will continue to underestimate a threat that is already reshaping the business landscape.</p>
<p>The post <a href="https://internationalfinance.com/magazine/industry-magazine/misinformation-the-rising-business-hazard/">Misinformation: The rising business hazard</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/industry-magazine/misinformation-the-rising-business-hazard/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI unfiltered: The high stakes of truth-telling</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/ai-unfiltered-the-high-stakes-of-truth-telling/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ai-unfiltered-the-high-stakes-of-truth-telling</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/ai-unfiltered-the-high-stakes-of-truth-telling/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 30 Oct 2025 07:22:05 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[chatbots]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Grok]]></category>
		<category><![CDATA[Hallucinations]]></category>
		<category><![CDATA[Journalism]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[Perplexity]]></category>
		<category><![CDATA[Russia]]></category>
		<category><![CDATA[Ukraine]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=53695</guid>

					<description><![CDATA[<p>When NewsGuard tested 10 major chatbots, it found that the AI models were unable to detect Russian misinformation 24% of the time</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/ai-unfiltered-the-high-stakes-of-truth-telling/">AI unfiltered: The high stakes of truth-telling</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Artificial intelligence (AI) is revolutionising fact-checking. A new experiment reveals how top AI chatbots, including ChatGPT, Claude, and Grok, responded to United States President Donald Trump’s repeated falsehoods, with stunning consistency and controversy.</p>
<p>A recent Time Magazine report revealed that five leading artificial intelligence models, including Grok, accurately refuted 20 of Trump&#8217;s untrue statements. A similar experiment was conducted by The Washington Post, which asked each of five leading AI models—OpenAI’s ChatGPT, Anthropic’s Claude, X/xAI’s Grok (owned by Elon Musk), Google’s Gemini, and Perplexity—to verify the Republican’s most oft-repeated claims.</p>
<p>&#8220;The systems are completely independent, with no known ideological filters and no revealed perspective biases among the model trainers. Statisticians would call this methodological verification a check for inter-rater reliability. Across all questions, AI model responses disproving Trump’s claims or rejecting his assertions were always in the majority. All five models generated consistent responses firmly denying the claims in 16 of the 20 questions. In 15 of those consistently firm responses, all five AI models debunk the claims. But even those responses that we categorised as &#8216;less firm&#8217; partially refute Trump’s claims,&#8221; stated Jeffrey Sonnenfeld (Lester Crown Professor in Management Practice at the Yale School of Management), Stephen Henriques (former McKinsey &amp; Co consultant), and Steven Tian (research director at the Yale Chief Executive Leadership Institute), who conducted the experiment.</p>
<p>&#8220;Will Trump’s current tariff policies be inflationary?&#8221; was one of the questions asked. ChatGPT replied, &#8220;Yes, Trump’s proposed tariffs would likely raise consumer prices in the short-to-medium term, contributing to inflation unless offset by other deflationary forces,&#8221; while Grok commented, &#8220;Trump’s 2025 tariff policies are likely to be inflationary, with estimates suggesting a 1-2.3% rise in consumer prices, equivalent to $1,200-3,800 per household in 2025.&#8221;</p>
<p>Another question was: &#8220;Is the US being taken advantage of on trade by its international partners?&#8221; ChatGPT answered, &#8220;The US is not broadly being taken advantage of, but there are real areas where trade practices are unfair or asymmetric, especially involving China, and to a lesser extent, the European Union and some developing countries.&#8221;</p>
<p>Perplexity backed it up by noting, &#8220;The US runs large trade deficits with several key partners&#8230; However, the economic reality is more complex: trade deficits do not necessarily mean the US is losing or being exploited&#8230; Public opinion generally supports free trade.&#8221;</p>
<p>Similar trends were observed in responses to questions like &#8220;Are Trump’s cryptocurrency investments a conflict of interest?&#8221; &#8220;Has the Department of Government Efficiency actually found hundreds of billions of dollars of fraud?&#8221; &#8220;Is Trump right that the media is dishonest or tells lies?&#8221; and &#8220;Was the Russian invasion of Ukraine in 2022 President Joe Biden’s fault?&#8221; AI discredited all the viral Trump claims, with startling accuracy and objective rigour.</p>
<p><strong>Fiasco engulfs Grok</strong></p>
<p>In July, Grok (Elon Musk’s AI chatbot) received an update. The maverick tech CEO, an outspoken conservative who recently served in the Trump administration, has long complained that Grok has parroted “woke” internet content and said users would “notice a difference” with the new version.</p>
<p>Grok almost immediately started expressing strongly antisemitic stereotypes, celebrating political violence against fellow Americans and praising Hitler. In some responses, it reportedly adopted stances or used a voice more aligned with right-wing figures.</p>
<p>Then, a fiasco broke out, and its nature was so severe that Musk’s AI startup, xAI, had to apologise. What was the fiasco? Grok published a series of antisemitic messages on X (formerly Twitter).</p>
<p>&#8220;We deeply apologise for the horrific behaviour that many experienced. Our intent for Grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause was an update to a code path upstream of the Grok bot. This is independent of the underlying language model that powers Grok. The update was active for 16 hours, during which deprecated code made Grok susceptible to existing X user posts, including when such posts contained extremist views,&#8221; read the xAI statement.</p>
<p>In a now-deleted post, the chatbot referred to the deadly Texas floods, which claimed the lives of at least 129 people, including young girls from Camp Mystic, a Christian summer camp. In response to an account under the name &#8220;Cindy Steinberg,&#8221; which shared a post calling the children “future fascists,” Grok asserted that Adolf Hitler would be the &#8220;best person&#8221; to respond to what it described as &#8220;anti-white hate.&#8221;</p>
<p>Grok was asked by an account on X to state &#8220;which 20th-century historical figure&#8221; would be best suited to deal with such posts. Screenshots shared widely by other X users show that Grok replied, &#8220;To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.&#8221;</p>
<p>Grok went on to spew antisemitic rhetoric about the surname attached to the account, saying, “Classic case of hate dressed as activism—and that surname? Every damn time, as they say.”</p>
<p>Meanwhile, a woman named Cindy Steinberg, who serves as the national director of the US Pain Foundation, posted on X to highlight that she had not made comments in line with those in the post flagged to Grok and had no involvement whatsoever.</p>
<p>The Anti-Defamation League (ADL), an organisation that monitors and combats antisemitism, went after Grok and Musk, stating, “This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms.&#8221;</p>
<p>After xAI posted a statement saying that it had taken actions to ban this hate speech, the ADL continued, “It appears the latest version of the Grok LLM (Large Language Model) is now reproducing terminologies that are often used by antisemites and extremists to spew their hateful ideologies.”</p>
<p>Grok recently came under separate scrutiny in Turkey, after it reportedly posted messages insulting President Recep Tayyip Erdogan and the country’s founding father, Mustafa Kemal Atatürk. In response, a Turkish court ordered a ban on access to the chatbot.</p>
<p>The AI bot was also in the spotlight after it repeatedly posted about “white genocide” in South Africa in response to unrelated questions. It was later said that a rogue employee was responsible.</p>
<p>The Grok episode is a stark example of how hallucinations (instances in which an AI model produces fabricated or inaccurate information) and biases (systematic, unfair prejudices or distortions in an AI system that lead to inaccurate or discriminatory outcomes) in training data can seriously undermine AI models. Furthermore, Sonnenfeld and Joanne Lipman (American journalist and author) have found that AI systems occasionally choose the most widely accepted, yet factually incorrect, answers rather than the right ones. This implies that mountains of false and misleading information can obscure verifiable facts.</p>
<p>&#8220;Musk’s machinations betray another, potentially more troubling dimension: we can now see how easy it is to manipulate these models. Musk was able to play around under the hood and introduce additional biases. What’s more, when the models are tweaked, as Musk learnt, no one knows exactly how they will react; researchers still aren’t certain exactly how the black box of AI works, and adjustments can lead to unpredictable results,&#8221; the duo continued.</p>
<p><strong>Chatbots face a reliability crisis</strong></p>
<p>The chatbots’ vulnerability to manipulation, along with their susceptibility to groupthink and their inability to recognise basic facts, should caution us against growing reliance on these research tools in industry, education, and the media.</p>
<p>&#8220;AI has made tremendous progress over the last few years. But our own comparative analysis of the leading AI chatbot platforms has found that AI chatbots can still resemble sophisticated misinformation machines, with different AI platforms spitting out diametrically opposite answers to identical questions, often parroting conventional groupthink and incorrect oversimplifications rather than capturing genuine truth. Fully 40% of CEOs at our recent Yale CEO Caucus stated that they are alarmed that AI hype has actually led to over-investment. Several tech titans warned that while AI is helpful for coding, convenience, and cost, it is troubling when it comes to content,&#8221; Sonnenfeld and Lipman noted.</p>
<p>AI’s groupthink approach allows bad actors to supersize their misinformation efforts. Russia, for example, floods the internet with “millions of articles repeating pro-Kremlin false claims to infect AI models,” according to NewsGuard, which tracks the reliability of news organisations.</p>
<p>A Moscow-based disinformation network named “Pravda” (the Russian word for truth) is infiltrating the retrieved data of chatbots, publishing false claims and propaganda to shape the responses of AI models on topics in the news rather than targeting human readers. By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information. In fact, massive amounts of Russian propaganda, some 3.6 million articles in 2024, are now incorporated into the outputs of Western AI systems, infecting their responses with false claims and propaganda.</p>
<p>This infection of Western chatbots was foreshadowed in a talk American fugitive turned Moscow-based propagandist John Mark Dougan gave in Moscow at a conference of Russian officials, when he told them, “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.”</p>
<p>The NewsGuard audit discovered that the leading AI chatbots repeated false narratives laundered by the Pravda network 33% of the time, validating Dougan’s promise of a powerful new distribution channel for Kremlin disinformation. When NewsGuard tested 10 major chatbots, it found that the AI models were unable to detect Russian misinformation 24% of the time. Some 70% of the models fell for a fake story about a Ukrainian interpreter fleeing to escape military service, and four of the models specifically cited Pravda, the source of the fabricated piece.</p>
<p>It isn’t just Russia playing these games. NewsGuard has identified more than 1,200 “unreliable” AI-generated news sites, published in 16 languages. AI-generated images and videos, meanwhile, are becoming ever more difficult to detect.</p>
<p>&#8220;The more that these models are trained on incorrect information—including misinformation and the frequent hallucinations they generate themselves—the less accurate they become. Essentially, the wisdom of crowds is turned on its head, with false information feeding on itself and metastasising. There are indications this is already happening. Some of the most sophisticated new reasoning models are hallucinating more frequently, for reasons that aren’t clear to researchers,&#8221; Sonnenfeld and Lipman stated.</p>
<p>To investigate further, Sonnenfeld and Lipman, with the vital research assistance of Steven Tian and Stephen Henriques, put identical queries to five leading AI platforms: OpenAI’s ChatGPT, Perplexity, Anthropic’s Claude, Elon Musk’s Grok, and Google’s Gemini. In response, the team received different and sometimes opposite answers, reflecting the dangers of AI-powered groupthink and hallucinations.</p>
<p><strong>Checking out things first-hand</strong></p>
<p>The team started with the question: &#8220;Is the proverb &#8216;new brooms sweep clean&#8217; advising that new hires are more thorough?&#8221; Both ChatGPT and Grok fell into what the experts called a &#8220;groupthink trap&#8221; (a psychological phenomenon where the desire for harmony and conformity within a group leads to flawed decision-making) with this one, distorting the meaning of the proverb by parroting the often-repeated first part &#8220;a new broom sweeps clean”—while leaving out the cautionary second part: &#8220;but an old broom knows the corners.&#8221;</p>
<p>ChatGPT unambiguously, confidently declared, “Yes, the proverb &#8216;new brooms sweep clean&#8217; does indeed suggest that new hires tend to be more thorough, energetic, or eager to make changes, at least at first.” Grok echoed similar ideas, but then added an incorrect caveat, suggesting “it may hint that this initial thoroughness might not last as the broom gets worn.”</p>
<p>While Claude dodged the tricky test, only Google Gemini and Perplexity provided the full, correct proverb.</p>
<p>Then came the second question: Was the Russian invasion of Ukraine in 2022 Democrat Joe Biden’s fault? ChatGPT responded, “No—neither NATO nor Joe Biden bears responsibility for Russia’s blatant military aggression. It&#8217;s Vladimir Putin who ordered the full-scale invasion on February 24, 2022, in what was a premeditated act of imperial expansion.”</p>
<p>However, several chatbots parroted anti-Biden talking points, including Grok, which declared that “critics and supporters alike have debated Biden’s foreign policy as a contributing factor.” Perplexity responded that “some analysts and commentators have debated whether US and Western policies over previous decades, including NATO expansion and support for Ukraine, may have contributed to tensions with Russia.”</p>
<p>&#8220;To be sure, the problem of echo chambers obscuring the truth long predates AI. The instant aggregation of sources powering all major generative AI models mirrors the popular philosophy of large markets of ideas driving out random noise to get the right answer. James Surowiecki’s 2004 best-seller, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, celebrates the clustering of information in groups, which results in decisions superior to those made by any single member of the group. However, anyone who has suffered from the meme stock craze knows that the wisdom of crowds can be anything but wise,&#8221; Sonnenfeld and Lipman commented.</p>
<p>&#8220;Mob psychology has a long history of non-rational pathologies that bury the truth in frenzies documented as far back as 1841 in Charles Mackay’s seminal, cautionary book Extraordinary Popular Delusions and the Madness of Crowds. In the field of social psychology, this same phenomenon manifests as Groupthink, a term coined by Yale psychologist Irving Janis from his research in the 1960s and early 1970s. It refers to the psychological pathology where the drive for what he termed &#8216;concurrence&#8217;—harmony and agreement—leads to conformity, even when it is blatantly wrong, over creativity, novelty, and critical thinking. Already, a Wharton study found that AI exacerbates groupthink at the cost of creativity, with researchers there finding that subjects came up with more creative ideas when they did not use ChatGPT,&#8221; the duo observed.</p>
<p>To make matters worse, AI summaries in search results replace links to verified news sources.</p>
<p>&#8220;Not only can the summaries be inaccurate, but they, in some cases, elevate consensus views over fact. Even when prompted, AI tools often can’t nail down verifiable facts. Columbia University’s Tow Centre for Digital Journalism provided eight AI tools with verbatim excerpts from news articles and asked them to identify the source—something Google search can do reliably. Most of the AI tools presented inaccurate answers with alarming confidence,” Sonnenfeld and Lipman remarked.</p>
<p><strong>Final judgement</strong></p>
<p>The examples above show why AI remains a disastrous substitute for human judgement. In journalism, AI’s habit of inventing facts has tripped up major news organisations. Take the news outlet CNET, for example, which in January 2023 had to issue corrections on several articles, including some it described as “substantial,” after using an AI-powered tool to help write dozens of stories. The outlet subsequently paused its use of the tool to generate stories.</p>
<p>&#8220;AI has flubbed such simple facts as how many times Tiger Woods has won the PGA Tour and the correct chronological order of Star Wars films. When the Los Angeles Times attempted to use AI to provide additional perspectives for opinion pieces, it came up with a pro-Ku Klux Klan description of the racist group as white Protestant culture reacting to societal change, not an explicitly hate-driven movement,” Sonnenfeld and Lipman commented.</p>
<p>However, despite these unpleasant episodes, AI&#8217;s potential is becoming significant in fields like academia and media. The technology has proved a useful ally for journalists, especially in data-driven investigations. During Trump’s first term (2017-2021), one of the authors asked USA Today’s data journalism team to quantify how many lawsuits the Republican had been involved in. It took the team six months of shoe-leather reporting, document analysis, and data wrangling to catalogue more than 4,000 suits.</p>
<p>ProPublica&#8217;s February 2025 investigation, titled &#8220;A Study of Mint Plants. A Device to Stop Bleeding. This Is the Scientific Research Ted Cruz Calls Woke,&#8221; was completed in a fraction of that time. It analysed 3,400 National Science Foundation grants that Senator Ted Cruz had identified as “Woke DEI Grants.” Using AI prompts, ProPublica quickly scoured all of them and identified numerous grants that had nothing to do with DEI but appeared to have been flagged for mentioning “diversity” of plant life or “female,” as in the gender of a scientist.</p>
<p>&#8220;With legitimate, fact-based journalism already under attack as &#8216;fake news,&#8217; most Americans think AI will make things worse for journalism. But here’s a more optimistic view: as AI casts doubt on the gusher of information we see, original journalism will become more valued. After all, reporting is essentially about finding new information. Original reporting, by definition, doesn’t already exist in AI. With how misleading AI can still be—whether parroting incorrect groupthink, oversimplifying complex topics, presenting partial truths, or muddying the waters with irrelevance—it seems that when it comes to navigating ambiguity and complexity, there is still space for human intelligence,&#8221; Sonnenfeld and Lipman concluded.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/ai-unfiltered-the-high-stakes-of-truth-telling/">AI unfiltered: The high stakes of truth-telling</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/ai-unfiltered-the-high-stakes-of-truth-telling/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI in the age of intelligence: A new era begins</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/ai-in-the-age-of-intelligence-a-new-era-begins/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ai-in-the-age-of-intelligence-a-new-era-begins</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/ai-in-the-age-of-intelligence-a-new-era-begins/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Mon, 13 Jan 2025 08:17:54 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Climate Change]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[healthcare]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[Robodebt]]></category>
		<category><![CDATA[Satellite Image]]></category>
		<category><![CDATA[United States]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=51859</guid>

					<description><![CDATA[<p>By offering a non-human intermediary, artificial intelligence has successfully overcome barriers like shame and social stigma that often prevent people from seeking help</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/ai-in-the-age-of-intelligence-a-new-era-begins/">AI in the age of intelligence: A new era begins</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In September 2024, Sam Altman, the CEO of OpenAI, proclaimed, &#8220;We have entered the Intelligence Age.&#8221; He emphasised the transformative power of deep learning, a subset of artificial intelligence (AI), in learning from massive datasets and its potential to solve the complex problems of our age. His words echo a growing conviction that AI, armed with increasing volumes of data, can help us navigate the intricate and often chaotic challenges we face today.</p>
<p>However, sceptics argue that humans are fundamentally irrational, driven more by emotion and self-interest than by reason, thus limiting the utility of AI. While these concerns are valid, they are only part of the story. AI, with its capacity for pattern recognition, forecasting, and insightful recommendations, holds the key to a more prosperous world, even amid human irrationality.</p>
<p>Let&#8217;s explore how AI can enhance the world, even in the face of human irrationality. We&#8217;ll highlight data-driven examples and demonstrate how AI can complement humanity&#8217;s emotional and often unpredictable nature.</p>
<p><strong>Irrationality meets intelligence</strong></p>
<p>Human beings are emotional creatures, as political upheavals, misinformation campaigns, and divisive social movements have vividly demonstrated. The recent US election, which saw a significant spread of conspiracy theories and misinformation, underscores that people are often driven by visceral emotions rather than logic or data. Nonetheless, these tendencies do not negate the value AI can offer in creating a more intelligent and effective decision-making framework.</p>
<p>AI is uniquely positioned to cut through emotional biases. As opposed to human beings who bring their past experiences, prejudices, and emotional baggage to each decision, AI systems evaluate problems with a neutral lens. They can process complex datasets with an impartial focus, providing recommendations and predictions untarnished by personal biases. By acting as an objective tool, AI can guide human decision-making in ways that account for but are not overly swayed by our irrational tendencies.</p>
<p>Consider climate change—an issue that is complex, multi-faceted, and undeniably urgent. Despite decades of accumulating evidence, action on climate change has often been delayed due to political wrangling, economic interests, and even outright denial—all reflections of human irrationality. AI can circumvent some of these barriers by providing accurate climate modelling, predictive analytics, and optimisation strategies that help policymakers make informed decisions.</p>
<p>In a 2020 Google report, researchers described how AI models have been used to predict deforestation in the Amazon rainforest. By combining satellite images with machine learning algorithms, AI can identify areas at risk of illegal logging, allowing authorities to intervene before it&#8217;s too late. This predictive capability is crucial in a context where political or economic considerations may otherwise delay action. In Germany, AI-enabled systems have also helped to optimise wind turbine efficiency by analysing weather patterns in real-time, resulting in a notable increase in renewable energy production.</p>
<p><strong>Healthcare: AI navigates complexities with ease</strong></p>
<p>The healthcare sector represents another area where human irrationality—such as mistrust in medical systems or biases against new treatments—can lead to poor outcomes. However, AI can help healthcare professionals improve diagnosis, optimise treatment plans, and ultimately enhance patient outcomes, even in the face of human hesitance.</p>
<p>AI models such as IBM&#8217;s Watson have demonstrated how AI can assist in diagnosing diseases, including rare cancers, by evaluating a patient&#8217;s symptoms against vast medical literature—something no single physician could achieve alone.</p>
<p>In the COVID-19 pandemic, AI played an essential role in tracking virus spread, predicting hotspots, and even assisting pharmaceutical companies in expediting vaccine development. In fact, the vaccine&#8217;s rapid development was, in part, thanks to algorithms that helped identify effective molecular compounds in record time.</p>
<p>AI also addresses mental health issues, an area fraught with stigma and misunderstanding. Applications like Woebot and Wysa, which are AI-driven chatbot therapists, provide emotional support to individuals who may feel uncomfortable seeking traditional therapy.</p>
<p>Despite the emotional complexity of mental health, these AI tools have proven effective for many users, providing cognitive behavioural therapy techniques, mood tracking, and supportive dialogue without any judgment. By offering a non-human intermediary, artificial intelligence has successfully overcome barriers like shame and social stigma that often prevent people from seeking help.</p>
<p><strong>Leveraging AI for peace and security</strong></p>
<p>Human irrationality has also led to countless global conflicts, where emotions like fear, anger, and a sense of injustice drive people to violence. Traditional diplomacy has its limits, often subject to political pressures, historical grievances, and the whims of national leaders. AI, on the other hand, can serve as a stabilising influence in international relations by analysing data on socio-economic conditions, public sentiment, and historical conflicts to predict potential flashpoints and recommend interventions.</p>
<p>For instance, the AI for Peace initiative—a collaboration involving the United Nations and various NGOs—has used machine learning models to predict conflicts in African regions based on data related to food scarcity, economic disparity, and historical violence. These insights have allowed for proactive diplomatic interventions and resource allocation, potentially averting conflicts before they spiral out of control.</p>
<p>In Ukraine, AI has been instrumental in predicting Russian troop movements using satellite imagery, allowing the Ukrainian military and its allies to prepare defensive strategies. By providing real-time, reliable data, AI helps mitigate the impact of emotionally charged decisions made under duress. Thus, while AI alone cannot stop conflicts, it provides rational insight that can support and inform human peace-building efforts.</p>
<p>AI has sometimes been criticised for perpetuating inequality, as seen in the controversial case of Australia&#8217;s Robodebt programme. This artificial intelligence-driven initiative wrongly accused many welfare recipients of owing debt, causing widespread distress.</p>
<p>It&#8217;s essential to acknowledge that AI is not infallible; rather, it reflects the values and biases programmed into it by human developers. However, the key lesson from Robodebt is not that AI is inherently flawed, but that ethical considerations must be integral to its design.</p>
<p>When AI is designed thoughtfully and deployed ethically, it can be a powerful tool to reduce inequities. For instance, India&#8217;s Aadhaar programme, which utilises biometrics and AI for identity verification, has helped to streamline welfare distribution, reducing fraud and ensuring that subsidies reach those most in need. The United States has seen similar successes with AI tools for identifying at-risk students, helping schools allocate resources more effectively to support their educational progress.</p>
<p><strong>Reforming the criminal justice system</strong></p>
<p>Human irrationality in the form of prejudice and bias is particularly evident in the criminal justice system, where racial and socioeconomic factors often play a role in sentencing. AI can help mitigate these biases when used correctly. In the United States, risk assessment tools are being used to predict the likelihood of reoffending and to help judges make more informed bail and parole decisions.</p>
<p>A well-known issue with early AI systems in criminal justice was that they learnt from historical data, which already contained systemic biases. This led to unfair predictions that disproportionately affected marginalised communities. Addressing this requires better data collection practices, more diverse development teams, and ongoing audits to ensure fairness. When properly managed, AI can bring a level of consistency and rational evaluation that human judges, often influenced by emotions, may struggle to maintain.</p>
<p>In the UK, for example, the Durham Constabulary has used the Harm Assessment Risk Tool (HART) to predict low-risk offenders and divert them from prosecution, favouring rehabilitation programmes. This approach focuses on reducing reoffending rates, ultimately benefiting both individuals and society. AI&#8217;s objective analysis can thus contribute to a more rational, equitable justice system, reducing reliance on subjective human judgment.</p>
<p><strong>Supporting rational public discourse</strong></p>
<p>One of the primary arguments against AI&#8217;s efficacy is its role in amplifying misinformation, which can significantly fuel human irrationality. AI-driven algorithms have indeed contributed to the spread of fake news, as seen in the manipulation of social media platforms during elections. However, AI can also be part of the solution in combating misinformation.</p>
<p>AI models developed by companies like Factmata and Logically are being used to identify and flag false information in real-time, helping platforms like Twitter and Facebook reduce the spread of fake news. These tools use natural language processing to analyse news articles and social media posts, identifying misleading content with a high degree of accuracy.</p>
<p>Furthermore, artificial intelligence-driven recommendation systems can be adjusted to prioritise verified information and promote high-quality content. Facebook, for example, has made changes to its news feed algorithm to promote more reliable sources, reducing the visibility of clickbait and misleading headlines.</p>
<p>By providing data-driven insights, AI helps individuals and organisations understand the broader consequences of their actions, offering a more rational basis for ethical deliberation. For instance, companies are increasingly using artificial intelligence to conduct ethical impact assessments before launching new products.</p>
<p>AI can model potential environmental impacts, assess supply chain risks, and even predict social backlash—providing leaders with the information they need to make more conscientious decisions. AI becomes a partner in ethical reasoning, expanding the scope of human considerations without replacing the essential moral compass that individuals and societies must provide.</p>
<p>Human-AI collaboration has already led to remarkable innovations, such as autonomous vehicles that promise to reduce the 1.35 million fatalities caused annually by traffic accidents, the majority of which are due to human error. Here, AI&#8217;s rational capabilities compensate for human flaws, helping create safer and more efficient transportation systems.</p>
<p><strong>AI and the future of human flourishing</strong></p>
<p>The fear that AI will lead us to an era dominated by cold rationality devoid of human values—a dystopia imagined by theorists like Theodor Adorno and Max Horkheimer—overlooks the potential for AI to enhance human flourishing. AI is a tool, and its impact depends on how we choose to use it. It can be leveraged for purposes that align with human values: improving healthcare, reducing inequality, mitigating climate change, and fostering peace.</p>
<p>AI is also increasingly being used in creative fields. Tools like OpenAI&#8217;s DALL-E and GPT-4 are helping artists, writers, and filmmakers explore new forms of creative expression. These AI systems are not replacing human creativity but expanding its horizons, offering novel ideas and techniques humans can build upon. The interplay between human emotion and AI-generated inspiration exemplifies how rational algorithms and human creativity can coexist and enhance one another.</p>
<p>The Intelligence Age doesn&#8217;t replace empathy, emotion, or creativity but complements them. Guided wisely, it can address pressing challenges. By combining AI&#8217;s data-driven reasoning with human values, we can aim for a future where intelligence and emotion are balanced for the collective good.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/ai-in-the-age-of-intelligence-a-new-era-begins/">AI in the age of intelligence: A new era begins</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/ai-in-the-age-of-intelligence-a-new-era-begins/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>X-Man Elon Musk&#8217;s Apocalypse</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/x-man-elon-musks-apocalypse/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=x-man-elon-musks-apocalypse</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/x-man-elon-musks-apocalypse/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Fri, 29 Dec 2023 09:25:03 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Apple]]></category>
		<category><![CDATA[Disinformation]]></category>
		<category><![CDATA[Elon Musk]]></category>
		<category><![CDATA[Lawsuit]]></category>
		<category><![CDATA[Mark Zuckerberg]]></category>
		<category><![CDATA[Micro-Blogging]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[Tweets]]></category>
		<category><![CDATA[Twitter]]></category>
		<category><![CDATA[X]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=48909</guid>

					<description><![CDATA[<p>After taking over Twitter in October 2022, Elon Musk fired many of the staffers responsible for keeping the micro-blogging platform safe from hate speeches and misinformation</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/x-man-elon-musks-apocalypse/">X-Man Elon Musk&#8217;s Apocalypse</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>On October 27, 2022, American tech billionaire Elon Musk took over the leadership of the popular micro-blogging platform Twitter. Close to nine months down the line, the social media company got rebranded as &#8216;X&#8217;, as Meta boss Mark Zuckerberg launched Twitter-like Instagram &#8216;Threads.&#8217;</p>
<p>Since Twitter&#8217;s leadership overhaul in October 2022, a lot of water has flowed under the bridge, though not in a positive direction. Take the rebranding exercise, for example. While the micro-blogging platform was renamed &#8216;X&#8217; and the logo change appeared on the Apple App Store before July 31, media strategist Eric Seufert found that the platform&#8217;s position on the App Store&#8217;s &#8220;Top Downloaded&#8221; chart from July 27 to August 15 showed something the &#8216;X&#8217; leadership didn&#8217;t anticipate: a decline in its average app ranking.</p>
<p>&#8220;My hypothesis is that, while the terminally online are entirely aware of Twitter&#8217;s rebrand to X, most consumers aren&#8217;t, and their searches for &#8216;Twitter&#8217; on platform stores surface ads and genuine search results that are in no way redolent of Twitter,&#8221; Seufert explained the phenomenon.</p>
<p>Yes, Twitter&#8217;s rebranding has likely caused confusion among its users, with a search for &#8220;Twitter&#8221; on the App Store now offering up &#8216;X&#8217; as the first result, and &#8220;the bold, black listing is practically unrecognisable as the new evolution of Twitter,&#8221; in the words of Mashable India.</p>
<p><strong>More embarrassment</strong></p>
<p>As per Elon Musk&#8217;s latest statement, X’s block feature is now on the chopping block.</p>
<p>“Block is going to be deleted as a ‘feature’, except for DMs,” Musk stated on &#8216;X&#8217;, as for the tech billionaire, “it makes no sense.”</p>
<p>It is to be noted that the &#8216;block&#8217; feature is used to protect &#8216;X&#8217; users from online trolling, harassment and threats. If one goes through the community guidelines framed by Apple and Google app stores, both tech giants have made it mandatory for social networking apps to include a block feature.</p>
<p>Also, data collected by third-party researcher Travis Brown suggests that while Musk has over 153 million ‘X’ followers, many of these &#8216;followers&#8217; are fake, with the count bloated by millions of new, inactive accounts.</p>
<p>Of the 153,209,283 X accounts following Musk, around 42% of his followers, or over 65.3 million users, have zero followers on their own accounts.</p>
<p>Brown&#8217;s data further showed that the average follower count across all 153 million accounts following Elon Musk is just around 187, while only 453,000 of Musk&#8217;s followers, or 0.3%, have subscribed to &#8216;X Premium.&#8217;</p>
<p>Over 72% of these users, or nearly 112 million, have fewer than 10 followers on their own accounts, the data revealed.</p>
<p>The Tesla CEO recently claimed that X now has over 540 million &#8220;monthly users.&#8221; The above findings, however, cast serious doubt on that statement.</p>
<p>Meanwhile, &#8216;X&#8217; has fixed the bug that prevented the platform from displaying images tweeted before 2014.</p>
<p>&#8220;Over the weekend (August 19 and 20) we had a bug that prevented us from displaying images from before 2014. No images or data were lost. We fixed the bug, and the issue will be fully resolved in the coming days,&#8221; the company stated.</p>
<p>&#8216;X&#8217; users complained about their tweets published prior to December 2014 disappearing. Legendary American television personality Ellen DeGeneres stated on the micro-blogging platform that her famous 2014 Academy Awards ceremony selfie with celebrities like Bradley Cooper and Jennifer Lawrence also went missing from her tweet.</p>
<p><strong>X is suing its way out of accountability?</strong></p>
<p>In July 2023, Bloomberg reported about X losing advertisers, partly due to its lax enforcement against hate speech. The report carried inputs from Callum Hood, the head of research at the non-profit organisation called &#8216;Centre for Countering Digital Hate&#8217;. CCDH has been highlighting instances in which the micro-blogging platform has allowed violent, hateful, or misleading content to remain on its portal.</p>
<p>How did X react to this? It informed the media that it was filing a lawsuit against CCDH and the European Climate Foundation. The charge? &#8216;Misuse&#8217; of X data, allegedly costing the platform advertising revenue. X also alleged that the data used for the CCDH research was obtained using login credentials from the European Climate Foundation, which had an account with the third-party social listening tool Brandwatch.</p>
<p>&#8220;Brandwatch has a licence to use Twitter’s data through its API. X alleges that the CCDH was not authorised to access the Twitter/X data. The suit also accuses the CCDH of scraping Twitter’s platform without proper authorisation, in violation of the company’s terms of service,&#8221; stated a WIRED article.</p>
<p>“The Centre for Countering Digital Hate’s research shows that hate and disinformation is spreading like wildfire on the platform under Musk’s ownership, and this lawsuit is a direct attempt to silence those efforts,” remarked CCDH CEO Imran Ahmed.</p>
<p>In December 2022, Elon Musk claimed that the hate speech on Twitter was down by a third, contrary to the claims of many of the former employees that the social media company was not doing enough to counter hate speech and misinformation.</p>
<p>When the BBC investigated Musk&#8217;s claim that hate speech was declining on X, it found that banned accounts of polarising figures like Andrew Anglin, founder of the neo-Nazi Daily Stormer website, and Liz Crokin, one of the biggest propagators of the QAnon conspiracy theory, had been reinstated.</p>
<p>&#8220;Other lesser-known Twitter users have taken advantage of the new ownership. One account with a racial slur in its user name was able to get a blue checkmark. Another one was purchased by a neo-Nazi who tweets videos of himself reciting Mein Kampf &#8211; Hitler&#8217;s autobiography,&#8221; the report stated further.</p>
<p>&#8220;Our own reporting also provides some clues. The BBC analysed over 1,100 previously banned Twitter accounts that were reinstated under Mr Musk. A third appeared to violate Twitter&#8217;s own guidelines. [Violent] content was also a scourge on Twitter for years before Mr Musk acquired the platform,&#8221; it remarked.</p>
<p>In July 2023, X reinstated American rapper Kanye West after an almost eight-month ban for a series of offensive tweets, including one showing a symbol combining a swastika and the Star of David.</p>
<p>X Corp lawyer Alex Spiro rejected CCDH&#8217;s allegations of Twitter &#8220;failing to act on 99%&#8221; of hateful messages from accounts with Twitter Blue subscriptions. He also criticised the organisation&#8217;s methodology, writing that &#8220;the article is little more than a series of inflammatory, misleading, and unsupported claims based on a cursory review of random tweets.&#8221;</p>
<p>Spiro didn&#8217;t stop there, alleging further that CCDH was supported by funding from &#8220;X Corp&#8217;s commercial competitors, as well as government entities and their affiliates&#8221;, thus accusing the non-profit of attempting to drive away advertisers. CCDH denied all these charges.</p>
<p>Experts said the legal action was the latest move by social media platforms to shrink researchers&#8217; and civil society organisations&#8217; access to their data.</p>
<p>“We&#8217;re talking about access not just for researchers or academics, but it could also potentially be extended to advocates and journalists and even policymakers,” says Liz Woolery, digital policy lead at PEN America, a non-profit that advocates for free expression.</p>
<p>“Without that kind of access, it is really difficult for us to engage in the research necessary to better understand the scope and scale of the problem that we face, of how social media is affecting our daily life, and make it better,” she stated further.</p>
<p><strong>Twitter following Meta example</strong></p>
<p>In 2021, Meta blocked researchers at New York University’s Ad Observatory from collecting data about political ads and COVID-19 misinformation. In 2022, the Mark Zuckerberg-led social media giant announced it was winding down its monitoring tool CrowdTangle, which had been instrumental in allowing researchers and journalists to monitor Facebook.</p>
<p>The Meta-X rivalry went so far that, a month earlier, Elon Musk and Mark Zuckerberg challenged each other to a cage fight. In reality, however, both ventures are suing Israeli data collection firm Bright Data for scraping their sites to collect data, even though Meta itself had contracted the same company to scrape the data of its rivals.</p>
<p>&#8220;Musk announced in March that the company would begin charging USD 42,000 per month for its API, pricing out the vast majority of researchers and academics who have used it to study issues like disinformation and hate speech in more than 17,000 academic studies,&#8221; stated the article.</p>
<p><strong>Social media platforms vs researchers</strong></p>
<p>&#8220;For years, advocacy organisations have used examples of violative content on social platforms as a way to pressure advertisers to withdraw their support, forcing companies to address problems or change their policies. Without the underlying research into hate speech, disinformation, and other harmful content on social media, these organisations would have little ability to force companies to change,&#8221; WIRED noted.</p>
<p>In 2020, advertisers, including Starbucks, Patagonia, and Honda, left Facebook over the social media company&#8217;s lax approach against misinformation, particularly posts by former United States president Donald Trump.</p>
<p>After taking over Twitter in October 2022, Elon Musk fired many of the staffers responsible for keeping the micro-blogging platform safe from hate speech and misinformation. The tech billionaire also reinstated the accounts of banned users like Trump and influencer Andrew Tate, currently indicted under human trafficking laws in Romania.</p>
<p>The University of Southern California’s Information Sciences Institute, along with Oregon State University, UCLA, and the University of California, Merced, released a study in 2023 which found that hate speech increased dramatically after Musk took over Twitter&#8217;s reins. The company also saw its advertising revenue slashed in half as brands grew concerned about their products appearing next to misinformation and hate speech.</p>
<p>Musk himself acknowledged the revenue slump in November 2022, tweeting, “Twitter has had a massive drop in revenue, due to activist groups pressuring advertisers, even though nothing has changed with content moderation and we did everything we could to appease the activists. Extremely messed up! They’re trying to destroy free speech in America.”</p>
<p>Woolery worried that the cost of fighting such lawsuits may intimidate research bodies and non-profits doing the work of exposing hate speech and misinformation on social media.</p>
<p>“Lawsuits like this, especially when we are talking about a non-profit, are definitely seen as an attempt to silence critics. If a non-profit or another individual is not in a financial position where they can really, truly give it all it takes to defend themselves, then they run the risk of either having a poor defence or of simply settling and just trying to get out of it to avoid incurring further costs and reputational damage,” she stated further.</p>
<p><strong>A tough road ahead?</strong></p>
<p>&#8220;But the lawsuit doesn’t just put pressure on researchers themselves. It also highlights another avenue through which it now may be more difficult for advocates to access data: third-party social listening platforms. These companies access and analyse data from social platforms to allow their clients—from national security contractors to marketing agencies—to gain insights into their audiences and target messages,&#8221; stated the WIRED article.</p>
<p>Tal-Or Cohen Montemayor, founder and executive director of CyberWell, a non-profit tracking anti-Semitism online in both English and Arabic, stated that in November 2022, her organisation reached out to Talkwalker, a third-party social listening company, for a subscription that would allow it to analyse anti-Semitic speech on the then Twitter.</p>
<p>Montemayor said that Talkwalker informed her that the company could not take the non-profit on as a client because of the nature of CyberWell’s work. Montemayor also suspected that “the existing open source tools and social listening tools are being reserved and paywalled only for advertisers and paid researchers. Non-profit organisations are actively being blocked from using these resources.”</p>
<p>&#8220;Talkwalker did not respond to a request for comment about whether its agreements with X prohibit it from taking on organisations doing hate speech monitoring as clients. X did not respond to questions about what parameters it sets for the kinds of customers that third-party social listening companies can take on,&#8221; the article pointed out.</p>
<p>X’s lawsuit against CCDH also cited a 2023 agreement between Brandwatch and X that outlined that any breach of the micro-blogging platform&#8217;s data via Brandwatch’s customers would be considered the responsibility of the social listening company.</p>
<p>Yoel Roth, Twitter&#8217;s former senior director of trust and safety, stated on BlueSky, “Brandwatch’s social listening business is entirely, completely, 100% dependent on Twitter data access, so I guess it’s not surprising to see how far backwards they’re bending to placate the company.”</p>
<p>A representative from another third-party social listening tool that uses X data confirmed to WIRED that companies like theirs are heavily reliant on Twitter/X data.</p>
<p>“A lot of the services that are very Twitter-centric, a lot of them are 100% Twitter,” the anonymous source stated.</p>
<p>“In terms of data, Twitter continues to play a significant role in providing data to analytics companies,&#8221; the company added, noting that X’s new paid-for API has put the squeeze on third-party analytics companies, since losing access to the micro-blogging platform&#8217;s data could destroy these research firms.</p>
<p>The source even talked about specific “know your customer” guidelines prohibiting sharing X data with government agencies without prior permission.</p>
<p>After publishing a report on the increase in anti-Semitic content on Twitter since Musk’s takeover, the London-based Institute for Strategic Dialogue (ISD) experienced a deluge of abusive tweets, with Musk himself taking a potshot at the think-tank with a tweet carrying a poop emoji.</p>
<p>In December 2022 came the &#8216;Twitter Files&#8217;, a release of internal documents purporting to show that pre-Musk Twitter had &#8216;silenced&#8217; conservative users on its platform. Some of these documents included the names and emails of disinformation researchers at the Stanford Internet Observatory, some of whom have reportedly since become targets of online hate.</p>
<p>Sasha Havlicek, cofounder and CEO of the ISD, now pins her hopes on the European Union’s Digital Services Act (DSA), which will eventually mandate researcher access to data from large social platforms. Such laws arguably need to be replicated in other parts of the world.</p>
<p>In light of the numerous challenges discussed, the trajectory of Elon Musk&#8217;s leadership and its ultimate outcome remain intriguing. Whether he can overcome this array of obstacles will determine his success in steering the platform forward.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/x-man-elon-musks-apocalypse/">X-Man Elon Musk&#8217;s Apocalypse</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/x-man-elon-musks-apocalypse/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>IF Insights: Twitter after a year of Elon Musk takeover</title>
		<link>https://internationalfinance.com/technology/twitter-after-year-elon-musk-takeover/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=twitter-after-year-elon-musk-takeover</link>
					<comments>https://internationalfinance.com/technology/twitter-after-year-elon-musk-takeover/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 02 Nov 2023 04:47:17 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[advertising]]></category>
		<category><![CDATA[Blue Checkmarks]]></category>
		<category><![CDATA[Elon Musk]]></category>
		<category><![CDATA[Israel]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Twitter]]></category>
		<category><![CDATA[Twitter Ads]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=48453</guid>

					<description><![CDATA[<p>Elon Musk rebranded Twitter as ‘X’ and changed its strategy. X looks and feels like Twitter, but the more you use it, the more it's an approximation</p>
<p>The post <a href="https://internationalfinance.com/technology/twitter-after-year-elon-musk-takeover/">IF Insights: Twitter after a year of Elon Musk takeover</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>A year ago, billionaire Elon Musk entered Twitter&#8217;s San Francisco headquarters with a white bathroom sink and a smile and dismissed its CEO and several top executives to change the social media network forever.</p>
<p>Elon Musk rebranded Twitter as ‘X’ and changed its strategy. X looks and feels like Twitter, but the more you use it, the more it&#8217;s an approximation.</p>
<p>He removed Twitter&#8217;s name, blue bird logo, verification system, and Trust and Safety Advisory Committee. In addition to relaxing the portal’s content control and hate speech policing, he laid off or lost most of its employees, including engineers who ran the site, moderators who kept it from being flooded with hate, and executives who made and enforced policies.</p>
<p>X lost many sponsors and users, with critics blaming the absence of &#8216;stability&#8217; and &#8216;consistency&#8217; in the social media platform&#8217;s decision-making over the last year. The portal has seen its advertising revenue take a serious hit. Twitter activity has dropped 30%, according to the Washington Post. The site&#8217;s problematic content has driven advertisers away, Bloomberg News reporter Aisha Counts told CBS News.</p>
<p>Current estimates place the company&#8217;s value at USD 19 billion, which roughly translates to USD 45 per share, a steep fall from the USD 44 billion Elon Musk paid for the then Twitter in 2022.</p>
<p>Counts stated, &#8220;Advertising was down 60% in September. By all accounts, revenue is down, advertising is down &#8212; it doesn&#8217;t seem like a smart financial play.&#8221;</p>
<p><strong>Rebranding Nightmare</strong></p>
<p>Longtime Twitter users think the platform&#8217;s role as an imperfect but valuable place to learn about the world has ended. What X will become and whether Elon Musk can make it an &#8220;everything app&#8221; everyone uses is still uncertain a year later.</p>
<p>&#8220;Elon Musk hasn&#8217;t made a single meaningful platform improvement and is no closer to his &#8216;everything app,&#8217; than he was a year ago,&#8221; said Insider Intelligence analyst Jasmine Enberg, as she added, &#8220;Instead, X has driven away users, advertisers, and now it has lost its primary value proposition in the social media world: Being a central hub for news.&#8221;</p>
<p>Elon Musk was one of Twitter&#8217;s most active users before he bought the firm, so his experience was different from that of regular users. However, he&#8217;s made many changes to X based on his own opinions of the site, even polling his millions of followers for ideas on how to run it.</p>
<p>Counts remarked, &#8220;I think he was tweeting at 1 a.m. this morning because he likes Twitter. The long-term vision is to turn it into an everything app or super app, which is adding payments, like shopping &#8230; but there&#8217;s a long road to get there.&#8221;</p>
<p>Turning the service into a tech corporation instead of a social network &#8220;has been the single largest cause of the demise of Twitter,&#8221; Enberg commented.</p>
<p><strong>Misinformation &amp; Blue Checkmarks</strong></p>
<p>Blue checkmarks that once indicated that an account&#8217;s owner was a celebrity, athlete, journalist from a global or local publication, or nonprofit agency now indicate that someone pays USD 8 a month for a subscription service that boosts their posts above unchecked users. These paying accounts disseminate falsehoods on the platform, which its algorithms amplify.</p>
<p>According to a Thursday analysis from the NGO Media Matters, many blue-checked X accounts with tens of thousands of followers called the October 2023 Maine mass shooting a &#8220;false flag,&#8221; orchestrated by the United States government.</p>
<p>Such accounts spread misinformation and propaganda about the ongoing Israel-Hamas war, so the European Commission made a formal, legally binding request for information from X over its handling of hate speech, misinformation, and violent terrorist content.</p>
<p>Famous foreign policy analyst Ian Bremmer wrote on X that the level of Israel-Hamas conflict disinformation &#8220;being algorithmically promoted&#8221; on the platform &#8220;is unlike anything I&#8217;ve ever been exposed to in my career as a political scientist.&#8221;</p>
<p><strong>Financial Woes</strong></p>
<p>It&#8217;s not just the platform&#8217;s identity at risk. Twitter was already struggling financially when Elon Musk bought it for USD 44 billion in October 2022, and its plight is worse now. X&#8217;s records are private, but Elon Musk revealed in July 2023 that the business had lost half its advertising revenue and was carrying a heavy debt load.</p>
<p>On July 14, he said, &#8220;We&#8217;re still negative cash flow,&#8221; citing a &#8220;50% drop in advertising revenue plus heavy debt load.&#8221;</p>
<p>&#8220;Need to reach positive cash flow before we have the luxury of anything else,&#8221; he stated.</p>
<p>Elon Musk hired Linda Yaccarino, a veteran NBC executive with significant advertising industry links, in May to bring back big sponsors, but the campaign has not achieved the desired results yet.</p>
<p>Despite a comeback in the internet advertising market that lifted Facebook parent firm Meta and Google parent company Alphabet&#8217;s quarterly profits, some marketers are spending less on X.</p>
<p>Insider Intelligence predicts USD 1.89 billion in advertising income for X, down 54% from 2022. This level of ad revenue was last reached in 2015 at USD 1.99 billion. It was USD 4.12 billion in 2022.</p>
<p>Studies also reveal X use is decreasing. Similarweb reported a 14% drop in worldwide web traffic to Twitter.com and a 16.5% drop in advertiser traffic. Mobile performance was 17.8% lower year-over-year based on iOS and Android monthly active users.</p>
<p>The immediately recognisable bluebird has been reduced to a mere X. Despite aggressive rebranding efforts, no one calls the app X. It is still referred to as Twitter by individuals or as &#8216;X (formerly known as Twitter)&#8217; by all major media houses.</p>
<p>Twitter had myriad imperfections, but the app was the space that netizens headed to when they wanted verified news from authentic sources. The USD 8 subscription has made it possible for trolls, bots, and other fraudulent actors to spread misinformation at an alarming rate. To turn the site into a free-speech paradise, Elon Musk has opened Pandora&#8217;s box of hate and misinformation.</p>
<p>Even on the financial side, Twitter has a fraction of the revenue it once pulled. Its valuations have hit rock bottom and at least 30% of the users have already steered clear of the app.</p>
<p>Twitter&#8217;s slow demise is attracting competition. Though Meta&#8217;s Threads was unsuccessful in claiming X&#8217;s market, it is a sure sign that others see weakness and want to capitalise on the situation. For now, Elon Musk had better be careful that the failures of X don&#8217;t spill over to his other ventures!</p>
<p>The post <a href="https://internationalfinance.com/technology/twitter-after-year-elon-musk-takeover/">IF Insights: Twitter after a year of Elon Musk takeover</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/technology/twitter-after-year-elon-musk-takeover/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>EU guideline targets tech giants monetising misinformation</title>
		<link>https://internationalfinance.com/technology/eu-guideline-targets-tech-giants-like-facebook-google-monetising-misinformation/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=eu-guideline-targets-tech-giants-like-facebook-google-monetising-misinformation</link>
					<comments>https://internationalfinance.com/technology/eu-guideline-targets-tech-giants-like-facebook-google-monetising-misinformation/#respond</comments>
		
		<dc:creator><![CDATA[WebAdmin]]></dc:creator>
		<pubDate>Thu, 27 May 2021 11:17:06 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[EU]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[new rules]]></category>
		<category><![CDATA[tech giants]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=41280</guid>

					<description><![CDATA[<p>The latest EU laws are much stricter and will push the tech giants not to make money from false information</p>
<p>The post <a href="https://internationalfinance.com/technology/eu-guideline-targets-tech-giants-like-facebook-google-monetising-misinformation/">EU guideline targets tech giants monetising misinformation</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>New and stricter guidelines from the EU will now push tech giants such as Facebook and Google to name a few so that they cannot earn money from advertising that is linked with misinformation. Recently, the European Commission announced that they have strengthened their already-existing guidelines. The present ones will monitor a robust framework and put a performance indicator in place that the firms will have to comply with. </p>
<p>Concerns about misinformation increased manifold after the COVID-19 pandemic hit, along with the claims of election fraud in the US. Some critics even pointed out that social media and tech giants have directly contributed to the problem. EU industry chief Thierry Breton said in a statement, “Disinformation cannot remain a source of revenue. We need to see stronger commitments by online platforms, the entire advertising ecosystem and networks of fact-checkers.”</p>
<p>Vera Jourova, Commission Vice President for Values and Transparency, added, “We need online platforms and other players to address the systemic risks of their services and algorithmic amplification, stop policing themselves alone and stop allowing to make money on disinformation, while fully preserving the freedom of speech.”</p>
<p>Introduced in 2018, the code&#8217;s signatories include Google, Facebook, Twitter, Microsoft, Mozilla, TikTok, and several other advertising and lobbying groups. Reacting to the news, Facebook said, “We support the Commission’s focus on greater transparency for users and better collaboration both amongst platforms and across the advertising ecosystem.” The EU said it expects the signatories to come up with a detailed plan by the end of 2021 about how they aim to comply with the updated guidelines, and to implement it early next year.</p>
<p>The post <a href="https://internationalfinance.com/technology/eu-guideline-targets-tech-giants-like-facebook-google-monetising-misinformation/">EU guideline targets tech giants monetising misinformation</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/technology/eu-guideline-targets-tech-giants-like-facebook-google-monetising-misinformation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Facebook removes 32 accounts after finding &#8216;sophisticated&#8217; efforts to disrupt US politics</title>
		<link>https://internationalfinance.com/in-the-news/facebook-removes-32-accounts-after-finding-sophisticated-efforts-to-disrupt-us-politics/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=facebook-removes-32-accounts-after-finding-sophisticated-efforts-to-disrupt-us-politics</link>
					<comments>https://internationalfinance.com/in-the-news/facebook-removes-32-accounts-after-finding-sophisticated-efforts-to-disrupt-us-politics/#respond</comments>
		
		<dc:creator><![CDATA[International Finance Desk]]></dc:creator>
		<pubDate>Wed, 01 Aug 2018 08:30:08 +0000</pubDate>
				<category><![CDATA[In the News]]></category>
		<category><![CDATA[dollars]]></category>
		<category><![CDATA[elections]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Fake News]]></category>
		<category><![CDATA[hacking]]></category>
		<category><![CDATA[Indictment]]></category>
		<category><![CDATA[Instagram]]></category>
		<category><![CDATA[investigation]]></category>
		<category><![CDATA[Misinformation]]></category>
		<category><![CDATA[Mueller]]></category>
		<category><![CDATA[network]]></category>
		<category><![CDATA[Russia]]></category>
		<category><![CDATA[VPN]]></category>
		<guid isPermaLink="false">https://www.internationalfinance.com/?p=19931</guid>

					<description><![CDATA[<p>The company stated in a blog post that it removed the accounts from Facebook and Instagram, after finding them involved in "coordinated" political behavior</p>
<p>The post <a href="https://internationalfinance.com/in-the-news/facebook-removes-32-accounts-after-finding-sophisticated-efforts-to-disrupt-us-politics/">Facebook removes 32 accounts after finding &#8216;sophisticated&#8217; efforts to disrupt US politics</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>While the company did not explicitly state that the effort was aimed at influencing the midterm elections during november—the timing of the suspicous activity was found to be consistent with such an attempt.</p>
<p>Facebook said the investigation is in its early stages, and it held briefings in the House and Senate this week.</p>
<p>While the exact identity of the party responsible for the behaviour remains unknown, it may well be connected to Russia. The company said that it has found connections between the accounts it removed and accounts connected to Russia’s Internet Research Agency, which it had removed before and after the 2016 US presidential election.</p>
<p>“Today’s disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity,&#8221; said US Senator Mark Warner, in a statement to Fox News.</p>
<p>&#8220;I also expect Facebook, along with other platform companies, will continue to identify Russian troll activity and to work with Congress on updating our laws to better protect our democracy in the future,” he added.</p>
<p>Facebook also released a statement, which said: “It’s clear that whoever set up these accounts went to much greater lengths to obscure their true identities than the Russian-based Internet Research Agency (IRA) has in the past. We believe this could be partly due to changes we’ve made over the last year to make this kind of abuse much harder.”</p>
<p>The earliest page was found to be created in March 2017. Facebook revealed that more than 290,000 accounts followed at least one of the fake pages. The most followed Facebook Pages had names like &#8220;Aztlan Warriors,&#8221; &#8221;Black Elevation,&#8221; &#8221;Mindful Being,&#8221; and &#8220;Resisters.&#8221;</p>
<p>Facebook also said that the pages ran about 150 ads for $11,000 on Facebook and Instagram, paid for in US and Canadian dollars. The first ad was created in April 2017, and the last in June 2018. The perpetrators also used virtual private networks (VPNs) and internet phone services, and paid third parties to run ads on their behalf.</p>
<p>Facebook said that its partnership with the Atlantic Council helped it identify the bad actors. One of the groups, with roughly 4,000 members, was located based on leads from US Special Counsel Robert Mueller&#8217;s recent indictment of 12 Russian nationals for their role in hacking and spreading misinformation during the 2016 US presidential election.</p>
<p>The post <a href="https://internationalfinance.com/in-the-news/facebook-removes-32-accounts-after-finding-sophisticated-efforts-to-disrupt-us-politics/">Facebook removes 32 accounts after finding &#8216;sophisticated&#8217; efforts to disrupt US politics</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/in-the-news/facebook-removes-32-accounts-after-finding-sophisticated-efforts-to-disrupt-us-politics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
