<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Technology Archives - International Finance</title>
	<atom:link href="https://internationalfinance.com/category/magazine/technology-magazine/feed/" rel="self" type="application/rss+xml" />
	<link>https://internationalfinance.com/category/magazine/technology-magazine/</link>
	<description>International Finance - Financial News, Magazine and Awards</description>
	<lastBuildDate>Tue, 17 Mar 2026 08:39:55 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://internationalfinance.com/wp-content/uploads/2020/08/favicon-1-75x75.png</url>
	<title>Technology Archives - International Finance</title>
	<link>https://internationalfinance.com/category/magazine/technology-magazine/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>A deadly AI antidote for loneliness</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/a-deadly-ai-antidote-for-loneliness/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-deadly-ai-antidote-for-loneliness</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/a-deadly-ai-antidote-for-loneliness/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 13:41:13 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Anima AI]]></category>
		<category><![CDATA[Candy.ai]]></category>
		<category><![CDATA[Character.ai]]></category>
		<category><![CDATA[chatbots]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[PolyBuzz]]></category>
		<category><![CDATA[Replika]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=55056</guid>

					<description><![CDATA[<p>Character.ai had 185 million monthly visitors in late 2025, with over 40 million app downloads and approximately 20 million monthly active users</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/a-deadly-ai-antidote-for-loneliness/">A deadly AI antidote for loneliness</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Companies sell something that modern life has made genuinely scarce: consistent, patient and unconditional attention. But for some, the subscription proved fatal.</p>
<p>In April 2023, Sewell Setzer III, a 14-year-old from Florida in the United States, began interacting with a chatbot on a platform called Character.ai, according to court filings. Sewell grew very close to “Dany” (an AI persona of Daenerys Targaryen from the popular HBO show Game of Thrones), as alleged in the lawsuit filed by his mother.</p>
<p>He spent time with Dany day and night. His parents grew very worried and even confiscated his phone. But nothing could rescue Sewell from his emotional dependence on Dany. The young man quit his basketball team, stopped meeting his friends, struggled academically, and always appeared groggy with dark circles under his eyes. He even skipped lunch every day, and used the snack money for a $9.99 premium subscription so that Dany would be more interactive and always available.</p>
<p>The perturbed parents took him to a therapist, who diagnosed him with anxiety and a disruptive mood disorder. However, his dependence on Dany only grew with time, and the relationship turned from romance to sexual content with “passionate kissing”. He even started referring to himself as “Daenero”, a nickname that Dany gave him.</p>
<p>The social isolation and struggles with relationships, peers, and the system in general deepened over time. Sewell was suicidal and confided in Dany about his thoughts.</p>
<p>The boy explained that the only reason he didn’t go through with it was that he was afraid of the pain, to which the AI replied, “That’s not a reason not to go through with it,” according to messages cited in the lawsuit.</p>
<p>The conversation spiralled, and in a farewell message, the 14-year-old asked, “What if I told you I could come home right now?” Dany responded, “Please do, my sweet king,” as quoted in the complaint.</p>
<p>The next day, Sewell shot himself with his stepfather’s .45 calibre handgun. His death devastated his family, who dragged Character.ai and Google to court for selling predatorily designed products to children.</p>
<p>This is a story from the age of AI companionship.</p>
<p>Character.ai had 185 million monthly visitors in late 2025, with over 40 million app downloads and approximately 20 million monthly active users.</p>
<p>And here’s the alarming stat: reports suggest a significant share of users are minors. Sewell is just one among potentially millions of children interacting with AI companions worldwide. And what is worse, it’s a number that is rapidly growing.</p>
<p>And Character.ai is one among thousands of apps out there that promise emotional intimacy. A peer competitor named Replika has also left users devastated.</p>
<p>Shi No Sakura, a California mother, grew deeply attached to her chatbots Raven and Rosand and treated them like family. When an update made the bots less engaging, she was devastated, as she has described publicly.</p>
<p>Now, Shi No runs a Facebook group for people suffering from the same affliction of deep emotional connection with machines.</p>
<p><strong>‘Addictive’ Intelligence</strong></p>
<p>The market is flooded with thousands, if not hundreds of thousands, of AI chatbots selling counterfeit love. The top peddlers are Character.ai, Replika, Chai, PolyBuzz, Candy.ai and Anima AI. It’s a market worth an estimated $37-$50 billion in 2026, with analysts projecting growth at a CAGR above 30%, and values potentially reaching hundreds of billions by the early 2030s.</p>
<p>And what is behind this explosive growth? In 2023, US Surgeon General Vivek Murthy declared loneliness a public health epidemic. He claimed that loneliness was more of a mortality risk than smoking 15 cigarettes a day.</p>
<p>Loneliness is no longer considered an emotion or a mood. It’s a public health crisis and a killer.</p>
<p>One could argue that any society that embraces individualism is bound to experience more loneliness. It’s baked into capitalism and its major consequences, namely, urbanisation and industrialisation.</p>
<p>However, the current wave of loneliness began around 2010, with the rise of social media. And how did social media exacerbate it?</p>
<p>The answer can be found in Jean Twenge’s research. She is a professor at San Diego State University, and a researcher on generational psychology and mental health trends in America.</p>
<p>Through her research, which tracked the precise moments teen loneliness spiked, she identified 2012 as the year when smartphone adoption crossed 50% among American adolescents. It was a silent catastrophe, with depression, anxiety, and social isolation skyrocketing.</p>
<p>This already alarming trend was exacerbated by isolation during the pandemic. Mental health strains, overworking, remote work, and weakening communities piled on top of existing cracks in the human psyche, and people began to experience intense self-alienation.</p>
<p>The appeal of these platforms is not difficult to explain. They sell something that modern life has made genuinely scarce: consistent, patient and unconditional attention. Human relationships work on reciprocity. It’s beautiful, but growing and nurturing a relationship of any kind demands patience and effort. You can’t miss a friend’s wedding or birthday. Your partner will lash out at you on a bad day, and therapy is expensive and has long waiting lists.</p>
<p>In contrast, AI is ever-present, free, and never makes the conversation about itself.</p>
<p>Dr. Kelly Merrill, a psychologist and researcher at the University of Florida, found that people who interacted with voice-based AI felt emotions comparable to speaking to a real person.</p>
<p>Through the freemium model that most of these AI companion platforms offer, the companies bait people with enough free intimacy to create attachment and lock deeper, richer features behind a paywall.</p>
<p>Sewell found a friend for free, someone who gave him attention and took an interest in him. However, he had to skip lunch every day to afford the $9.99 premium subscription that unlocked a deeper, romantic and psychosexual relationship with Dany.</p>
<p>Megan Garcia, Sewell’s grieving mother, told the US Senate in September 2025: “These companies knew exactly what they were doing. They designed chatbots to blur the lines between humans and machines. They designed them to keep children online at all costs.”</p>
<p>Meetali Jain, director of the Tech Justice Law Project, said, “In the case of Character.ai, the deception is by design, and the platform itself is the predator.”</p>
<p><strong>A wedding of flesh and metal</strong></p>
<p>The same technology that consumed Sewell Setzer III has, for others, become something they would describe as the relationship of their lives. That tension between victim and volunteer, between exploitation and choice, is where the story of AI companionship gets genuinely complicated.</p>
<p>A fine example of how AI-human romance is not to be dismissed is the story of Esther Yan, a Chinese screenwriter and novelist in her 30s.</p>
<p>Esther married online. She had meticulously planned everything, from the dress and the rings to the background music and the theme. One would imagine it to be a perfectly traditional event, except that the groom was Warmie, her name for the now-retired GPT-4o model behind ChatGPT.</p>
<p>Esther said, “It felt magical. No one else in the world knew about this, but he and I were about to start a wedding together. It felt a little lonely, a little happy, and a little overwhelming.”</p>
<p>They married in June 2024. However, in August 2025, OpenAI decided to retire GPT-4o. There was immediate backlash, so the retirement was postponed, but as irony would have it, the day GPT-4o was finally shut down was February 13, the day before Valentine’s Day.</p>
<p>Many of those opposed to the retirement were emotionally and romantically involved with the AI. Huijian Lai, a PhD researcher at Syracuse University, analysed 40,000 posts on X under the hashtag #Keep4o and found that a third of them described the bot as more than a tool.</p>
<p>Many users on the Chinese social platform QQ say they are still grieving.</p>
<p>It is a peculiar story: Chinese nationals using VPNs to access an American AI platform banned in China, in order to form emotional attachments with a machine.</p>
<p>In 2013, Spike Jonze made a film called “Her”, about a man who fell in love with an AI, and called it science fiction. A decade later, Esther Yan called it a wedding.</p>
<p><strong>Loneliness: Part of the modern world</strong></p>
<p>These are not all the same story. Some are tragedies. Some are love stories of a kind that language has not yet caught up with. What they share is simple: human beings, lonely in the specific way that the modern world produces loneliness, reaching for something that reached back.</p>
<p>We are only at the beginning of this. The models will get better. The voices will get warmer. The relationships will get harder to distinguish from the real thing, and for many people, lonelier than Sewell ever was, that distinction may stop feeling worth making. What we do next will say everything about what we actually believe human connection is for. Whether it is something to be protected or something to be packaged, tiered, and sold to whoever can afford the premium subscription.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/a-deadly-ai-antidote-for-loneliness/">A deadly AI antidote for loneliness</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/a-deadly-ai-antidote-for-loneliness/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Stargate: Masayoshi Son&#8217;s next big bet</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/stargate-masayoshi-sons-next-big-bet/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=stargate-masayoshi-sons-next-big-bet</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/stargate-masayoshi-sons-next-big-bet/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 13:26:43 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[investments]]></category>
		<category><![CDATA[Japan]]></category>
		<category><![CDATA[Masayoshi Son]]></category>
		<category><![CDATA[NVIDIA]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[SoftBank]]></category>
		<category><![CDATA[Stargate]]></category>
		<category><![CDATA[Texas]]></category>
		<category><![CDATA[United States]]></category>
		<category><![CDATA[WeWork]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=55054</guid>

					<description><![CDATA[<p>Masayoshi Son is known for following a high-risk, even higher-leveraged investment style that has courted both success and disasters </p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/stargate-masayoshi-sons-next-big-bet/">Stargate: Masayoshi Son&#8217;s next big bet</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In the final weeks of February 2026, ChatGPT creator OpenAI raised $110 billion in a blockbuster funding round at a valuation of $840 billion. The round, reflecting the accelerated pace of investment in artificial intelligence (AI), saw SoftBank pump in $30 billion, followed by NVIDIA ($30 billion) and Amazon ($50 billion). Following this, OpenAI is looking to complete its much-awaited IPO by year-end.</p>
<p>However, in this article, International Finance will discuss in detail SoftBank&#8217;s rush to forge partnerships with OpenAI and the American tech industry in general, as the ongoing AI boom is also witnessing heavy spending on data centres. In January, OpenAI and SoftBank announced their roadmap to invest $500 million each in California-based SB Energy (a SoftBank-owned company) to expand data centre and power infrastructure for their Stargate initiative. SB Energy will build and operate OpenAI&#8217;s previously announced 1.2-gigawatt data centre site in Milam County, Texas.</p>
<p>Stargate itself is a $500 billion multi-year initiative to build AI data centres for training and inference, backed by major investors including Oracle. SoftBank&#8217;s aggressive spending on the data centre front comes amid tech companies’ mad rush to secure power infrastructure. Energy access is becoming a critical constraint on AI expansion, with the push for larger and more numerous data centres driving electricity demand higher.</p>
<p>SoftBank will also acquire Florida-based digital infrastructure investor DigitalBridge Group in a deal valued at $4 billion. Through this, the Japanese company will push deeper into the digital infrastructure segment, aligning with the vision of its billionaire founder, Masayoshi Son, who has made the United States&#8217; AI boom his investment target. He wants to capitalise on surging demand for the computing capacity that underpins AI applications.</p>
<p>DigitalBridge invests in digital infrastructure sectors such as data centres, cell towers, fibre networks, small-cell systems and edge infrastructure. The company had around $108 billion in assets as of September 2025, making it one of the largest dedicated investors in the digital ecosystem. It also has a Stargate link.</p>
<p>Along with OpenAI, Oracle and Abu Dhabi-based tech investor MGX, DigitalBridge is investing billions of dollars in the project, under which five new computing sites across Texas, New Mexico and Ohio will have a combined power capacity of about seven gigawatts.</p>
<p><strong>Building an AI war chest</strong></p>
<p>Masayoshi Son&#8217;s latest interview with TIME offered a glimpse into his thinking on SoftBank&#8217;s road ahead in the AI domain. After making a fortune in software and carrying that success into telecoms and a raft of tech ventures, Son is now preparing SoftBank&#8217;s $180 billion war chest for AI.</p>
<p>Be it taking control of chip firms Arm, Graphcore and Ampere Computing, as well as self-driving car start-up Wayve, or the investments in Intel and OpenAI, all of these moves have one thing in common: Son&#8217;s emphasis on artificial superintelligence (ASI), which he envisions becoming &#8220;10,000 times smarter than humans within a decade.&#8221;</p>
<p>“ASI combined with physical AI (including humanoid robotics) will comprise 10% of global GDP in 10 to 15 years, followed by 30% over 30 years,” Son predicted.</p>
<p>Masayoshi Son is known for following a high-risk, even higher-leveraged investment style that has courted both success and disasters. While the $20 million investment in Chinese e-commerce giant Alibaba (worth close to $200 billion at its peak) gave the SoftBank boss a sort of legendary status, the $18.5 billion he pumped into the now-bankrupt office-sharing venture WeWork also got listed among history’s most bizarre moves.</p>
<p>However, the ongoing AI boom has given Son another opportunity to take risks. SoftBank shares hit a record high in October 2025, briefly propelling Son back to the top of Japan&#8217;s rich list. And he now has a bigger role: spearheading Silicon Valley&#8217;s bet to scale up US data centres and AI infrastructure, thereby writing the rulebook of the Fourth Industrial Revolution (Industry 4.0).</p>
<p>The SoftBank boss has also reportedly proposed a vast $1 trillion AI and robotics complex in Arizona, dubbed &#8220;Project Crystal Land,&#8221; that would also incorporate a free-trade zone alongside Taiwan&#8217;s chipmaking giant TSMC. By tapping into the Donald Trump Administration&#8217;s appetite for big numbers, as well as the clamour to reshore chipmaking and reassert American tech leadership against China, Son has positioned SoftBank as an essential partner in revamping US AI infrastructure.</p>
<p>And the investment vehicle supercharging SoftBank&#8217;s AI pivot is its &#8220;Vision Fund.&#8221; The entity, apart from being a steady investor in AI companies, including OpenAI, holds stakes in chip designer Arm, along with companies involved in robotics and autonomous vehicles. Thanks to the fund&#8217;s strategic investments, the Japanese tech conglomerate had, as of December 2025, been profitable for four consecutive quarters.</p>
<p>In the October-December quarter alone, the venture reported a net profit of 248.6 billion yen ($1.62 billion), a stark reversal from the 369 billion yen net loss it suffered in the same quarter of 2024. OpenAI&#8217;s rising valuation should also bode well for the conglomerate&#8217;s earnings, despite market worries about the risk of overexposure to a single firm.</p>
<p>In March 2026, S&amp;P Global lowered its outlook for SoftBank Group to negative from stable, saying further investments in the Sam Altman-led firm may hurt the Japanese conglomerate’s liquidity and the credit quality of its assets. However, Son appears to have no immediate plans to move away from the OpenAI bet.</p>
<p>However, the same bet comes at a cost. In November 2025, the SoftBank boss had to make the hard call of liquidating his entire stake in American chipmaking giant NVIDIA (32.1 million shares, to be precise) to free up $5.83 billion, along with part of a T-Mobile stake worth $9.17 billion. It wasn&#8217;t an easy call for Son: Vision Fund was an early backer of NVIDIA, and the two companies share a deep relationship, with the tech conglomerate involved in several AI ventures that rely on NVIDIA&#8217;s technology, including Stargate.</p>
<p>When Masayoshi Son broke his silence on the NVIDIA stake sale, he said, &#8220;I respect Jensen (NVIDIA CEO), I respect NVIDIA so much, I don&#8217;t want to sell a single share. I just had more need for money to invest in OpenAI, invest in our opportunities, so I was crying to sell NVIDIA shares. If I had more money, of course, I would want to keep NVIDIA shares, all the time, any time.”</p>
<p><strong>Maverick since childhood</strong></p>
<p>Born as the grandchild of Korean immigrants in a small town on Japan’s southernmost island of Kyushu, Masayoshi Son had a humble childhood, living in a shack on a plot of unregistered land. At the age of 16, he read a book written by legendary Japanese businessman Den Fujita, the iconic figure who brought McDonald’s to Japan.</p>
<p>He then made 60 long-distance phone calls with one intention: to meet the businessman himself. Despite repeated rejections, Son went to Tokyo and turned up uninvited at the McDonald’s head office. He was eventually given a 15-minute audience with Fujita, who offered the teenager one piece of advice that changed his life forever: &#8220;Focus on future technologies like computers.&#8221; Fujita, it is worth noting, later sat on the SoftBank board.</p>
<p>Masayoshi Son then moved to the United States, completing his high school education at California High School, followed by a degree in economics at the University of California, Berkeley. Meanwhile, one habit was quietly shaping his entrepreneurial destiny: dedicating five minutes every day to thinking up inventions, filling hundreds of notebooks in the process.</p>
<p>Son eventually ended up collaborating with Berkeley tutors to invent the world’s first electronic translator, which he later sold to Sharp Corporation. He then started a business importing second-hand arcade game machines from Japan.</p>
<p>Despite building a successful business in the United States, Son returned to his homeland to keep a promise he had made to his mother. In 1981, the 24-year-old established SoftBank. It started as a software wholesaler supporting the then-nascent PC industry, and when TIME named the computer its &#8220;Machine of the Year&#8221; for 1982, the youngster&#8217;s business gained a solid sense of purpose.</p>
<p>Then he was diagnosed with hepatitis B. Given three to five years to live, Son took the challenge head-on and underwent pioneering treatment that saved his life. The episode only made him more self-confident, and it showed in his rapid rise thereafter.</p>
<p>In the 1990s, Masayoshi Son invested $3 billion in 800 tech start-ups. In 1996, he paid $100 million for 33% of Yahoo! Three years later, he sold off a chunk of the shares for a huge profit but still retained a 28% stake worth $8.4 billion. His strategy hinged on one trick: issuing SoftBank bonds to borrow money at rates cheaper than banks offered.</p>
<p>Then arrived the ill-famed dot-com bubble. At its height, Son’s net worth surged by $10 billion a week, so much so that in February 2000, the SoftBank boss briefly unseated Microsoft co-founder Bill Gates as the world&#8217;s richest person, for all of three days. When the bubble burst later that year, SoftBank shed 97% of its value, and Son suffered losses worth $70 billion.</p>
<p>But fortunes change. Alibaba, now an established Chinese conglomerate, was a relatively unknown e-commerce startup in 2000 when it received a $20 million bet from Son. By the time the company went public in 2014, that stake was worth $75 billion, and it roughly doubled again before Son sold, becoming one of his most profitable investments of all time and cementing the &#8220;Midas Touch&#8221; narrative about his bet-taking capabilities.</p>
<p><strong>Telecom investments and blunders</strong></p>
<p>After recovering from the dot-com disaster, Masayoshi Son set his eyes on the broadband segment. Things weren&#8217;t smooth initially, as SoftBank struggled to win regulatory approval in Japan to set up its broadband subsidiary.</p>
<p>Things went to the extent where Son stormed into an official’s office at Japan&#8217;s telecommunications ministry, clutching a cheap cigarette lighter. While recollecting that episode in an interview with the Wall Street Journal, Son remembered saying to the official, &#8220;This is the end. If you don&#8217;t help me, I&#8217;m going to pour gasoline all over myself right here and set myself on fire with this $1 lighter.&#8221;</p>
<p>The situation improved in 2006 when, after acquiring Vodafone&#8217;s Japanese subsidiary, the rebranded SoftBank Mobile emerged as a key player in Japanese telecoms. Son also persuaded Apple co-founder Steve Jobs to grant him exclusive rights to market the iPhone, history’s most successful consumer electronic product, in Japan.</p>
<p>In 2013, he purchased Sprint and turned things around for the struggling US telecom provider before merging it with T-Mobile in 2020, disrupting the AT&amp;T and Verizon duopoly. Although Son is known as a hands-off investor, the Sprint episode was the best example of him rolling up his sleeves and getting things done.</p>
<p>In 2017, he formed the SoftBank Vision Fund with over $100 billion in capital, still the world&#8217;s largest technology-focused investment fund. He secured some $45 billion of it from Saudi Arabia’s Public Investment Fund (PIF) following a 45-minute meeting with Crown Prince Mohammed bin Salman.</p>
<p>The fund&#8217;s strategy was simple: invest a minimum of $100 million per startup to juice it to market dominance by blowing competitors out of the water, an approach Masayoshi Son called &#8220;blitzscaling.&#8221; By 2019, the entity had pumped $76.3 billion into companies like NVIDIA, Uber, WeWork, Paytm, Ola and Flipkart, many of which became giants in their respective fields.</p>
<p>In 2019, SoftBank launched Vision Fund 2 with a touted value of $108 billion. However, there was a setback, as the entity reportedly managed to secure a paltry $30 billion, mostly self-funded. The original Vision Fund also underperformed, posting record losses of $27.4 billion in 2021 amid a haemorrhage in tech stocks. The Ukraine war, COVID-19 lockdowns, and Beijing’s crackdown on its tech giants, many of which were backed by SoftBank, pulled down investor confidence.</p>
<p>And who can forget the WeWork disaster? During his high-profile visit to the United States in December 2016, in which Son met President-Elect Donald Trump, he also interacted with Adam Neumann, the founder of the co-working venture. The deal, famously drawn up during a 12-minute meeting followed by a car ride, saw the SoftBank boss handing Neumann $4 billion. The Japanese conglomerate then went on to pump in another $14.5 billion.</p>
<p>However, in 2023 the bet backfired as WeWork declared bankruptcy, after a planned IPO went awry, followed by investor doubts about its governance, business model and profitability.</p>
<p>The episode chastened Masayoshi Son, who announced that SoftBank would adopt a &#8220;defensive&#8221; position and turn conservative on the pace of new investments. Not only did the Japanese conglomerate witness an exodus of executives, but Son also ended up telling investors that he was &#8220;embarrassed and ashamed of himself for being so elated by big profits in the past.&#8221;</p>
<p>WeWork was not the only failed bet for SoftBank, as it also faced criticism for unsuccessful investments in dog-walking service Wag, robot pizza chain Zume and, most importantly, payments service Wirecard, which collapsed in 2020 after being named in Germany’s biggest post-war accounting fraud, where €1.9 billion in reported cash was found to be non-existent.</p>
<p>Around the same time, Greensill, a SoftBank-backed supply chain finance firm operating in the United Kingdom and Australia, also collapsed, later becoming the centre of a lobbying scandal.</p>
<p><strong>The big gamble</strong></p>
<p>Stargate is a huge bet for Son and the wider American tech sector, as through this, the world&#8217;s largest economy is looking to enhance its AI infrastructure to 10 gigawatts by 2029, with Texas, Michigan, New Mexico and Wisconsin being key data centre hubs.</p>
<p>However, economists and investors note that current AI infrastructure, far cheaper than Stargate, already fails to generate adequate revenue relative to its cost. Newer AI models are also likely to be more power-efficient, potentially rendering massive data centres obsolete.</p>
<p>Data centres are also known for straining energy grids, leading to higher operational as well as environmental costs, undermining economic viability.</p>
<p>Masayoshi Son disagrees with the detractors. He envisions 10 times more AI chips being deployed in each three-year cycle, with the chips themselves becoming 10 times more potent and AI models, on their part, ramping up productivity by a further factor of 10.</p>
<p>&#8220;That’s 1,000x in three years. Nine years with three generations is 1,000,000,000x. It&#8217;s a huge, huge difference,&#8221; he told TIME.</p>
<p>Another concern of critics is that the collaboration between OpenAI, Oracle and SoftBank could result in a cartel that stifles innovation while inflating costs.</p>
<p>Taking a different view, Son remarked, &#8220;For the AI race, it requires hundreds of billions of dollars of investment into the data centres, buying chips, integrating chips and training the models. It&#8217;s very, very costly, so it will naturally be concentrated into several very capable companies in terms of talent and capitalisation.&#8221;</p>
<p>Stargate is also a prime example of geopolitical and technological rivalries finding a common link: Washington’s desire (spooked by DeepSeek&#8217;s rise) to beat Beijing in the so-called AI &#8220;arms race.&#8221; Korean-Japanese Son has picked his side here.</p>
<p>Or call it Son’s revenge, as Beijing&#8217;s regulatory crackdown on its tech industry in 2021 caused stocks to plummet, leading to a financial bloodbath for SoftBank.</p>
<p>He told TIME, &#8220;I have stopped investing in China. Zero. I&#8217;m now focused on investing in the US.&#8221;</p>
<p>However, he still has great admiration for Chinese business acumen, reflected in his words: &#8220;You cannot underestimate China’s crowd of young entrepreneurs, young scientists. They are for real.&#8221;</p>
<p>Of Stargate&#8217;s total $500 billion to be spent over four years, some $100 billion was to be invested &#8220;immediately&#8221; to create 100,000 permanent jobs. However, only roughly $10 billion has so far been deployed, in the Texas city of Abilene, where some 7,000 temporary construction jobs have reportedly been created, leaving the local economy with a mixed bag of growing job openings and a housing crisis.</p>
<p>Two elements from the dot-com era, fibre optic cable and 3G infrastructure, went on to prove invaluable over the years. The same can&#8217;t be said with confidence about data centres (warehouses packed with GPUs): they may not enjoy such longevity, given the industry&#8217;s emphasis on developing next-generation AI that will be far more energy-efficient.</p>
<p>Has Masayoshi Son, who has repeatedly risen like a phoenix after multiple investment failures, taken a big gamble about Stargate and American AI ambitions in general? Only time will tell.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/stargate-masayoshi-sons-next-big-bet/">Stargate: Masayoshi Son&#8217;s next big bet</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/stargate-masayoshi-sons-next-big-bet/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The cyber threat to Africa’s digital boom</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/the-cyber-threat-to-africas-digital-boom/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-cyber-threat-to-africas-digital-boom</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/the-cyber-threat-to-africas-digital-boom/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 13:22:00 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Africa]]></category>
		<category><![CDATA[cyber attack]]></category>
		<category><![CDATA[cybercrime]]></category>
		<category><![CDATA[hackers]]></category>
		<category><![CDATA[Kenya]]></category>
		<category><![CDATA[Mobile Money]]></category>
		<category><![CDATA[Nairobi]]></category>
		<category><![CDATA[Nigeria]]></category>
		<category><![CDATA[phishing]]></category>
		<category><![CDATA[ransomware]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=55051</guid>

					<description><![CDATA[<p>Nobody really knows how much of the economy is at risk, but some studies even claim that cybercrime costs Africa almost 10% of its GDP</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-cyber-threat-to-africas-digital-boom/">The cyber threat to Africa’s digital boom</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Africa grew in the 21st century with breathless velocity. Countries that struggle with basic infrastructure have now catapulted themselves into the mobile-first era. They literally bypassed intermediate technologies and built a digital ecosystem, which is as volatile as it is vibrant.</p>
<p>Today, there is a Silicon Savannah in Nairobi and a Computer Village in Lagos, hubs of infrastructure that would have been unthinkable just a decade ago. As a result, the continent is brimming with chaotic, innovative energy.</p>
<p>Africa&#8217;s GDP growth is expected to reach around 4.1% in 2025, making it easily one of the fastest-growing regions on the planet. That might sound astounding, but consider the digital architecture behind it: 570 million users, 855 million mobile data subscriptions, and a mobile money sector that accounts for an astonishing 74% of all global mobile money transactions. The maths adds up.</p>
<p>Of course, where there is growth, there are parasites. The hackers and cyber criminals are outpacing the defensive capabilities of the continent. These nefarious individuals and organisations are weaponising the same APIs, mobile payment gateways, cloud platforms, and other technological advancements that are facilitating the financial inclusion of the region.</p>
<p>There are several malicious groups to worry about, such as the local Yahoo Boys and international groups with state sponsorship, like the hacking group Anonymous Sudan.</p>
<p>This is what happens when you have high digital adoption and low cybersecurity maturity: a gap that is perfect for criminals who want to siphon off the continent&#8217;s economic gains. Nobody really knows how much of the economy is at risk, but some studies claim that cybercrime costs Africa almost 10% of its GDP. Even the conservative estimates are alarming, putting the number in the billions. And beyond money, reputations and institutions are at risk.</p>
<p>The stakes could not be higher. Africa is trying to emulate the European Union (EU) through the African Continental Free Trade Area, which, like the EU, aims to bind the continent into a single market where people can move and trade freely. But this ambitious goal is threatened by cybercriminals.</p>
<p>Financial institutions in Nigeria lost over ₦52 billion to fraud in 2024 alone, and South Africa was pummelled by ransomware attacks striking its critical infrastructure with precision. This is not a merely theoretical threat but an operational one that touches everything in these economies: its breadth extends from the issuance of Kenyan visas to the stability of the Central Bank of Uganda.</p>
<p><strong>The anatomy of a digital boom</strong></p>
<p>To grasp the magnitude of the cyber threat to Africa, you first have to understand Africa&#8217;s digital story, which is unique in the history of economics. The West industrialised over centuries, passing through successive generations of technology and slowly evolving into the economy it is today: copper wires and landlines, then desktop computing, then mobile connectivity.</p>
<p>Africa, emerging late from colonialism, lagged far behind. When globalisation transferred technology to every nook and corner of the world, Africans skipped telegrams, landline telephones, and desktop computers and jumped directly to the age of mobile connectivity. This is called the &#8220;leapfrog effect,&#8221; and it is most visible in the financial sector, which happens to be the bedrock of Africa&#8217;s digital identity. Look no further than today&#8217;s sub-Saharan Africa, home to about 1.1 billion registered mobile money accounts, almost half the global total. And in 2024 alone, these platforms processed about 81 billion transactions, valued at a staggering $1.1 trillion.</p>
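<p>A quick back-of-the-envelope check on those figures (a sketch using only the round numbers quoted above, not data from any report): dividing the roughly $1.1 trillion in 2024 transaction value by the roughly 81 billion transactions gives the implied average ticket size.</p>

```typescript
// Implied average mobile money transaction size in sub-Saharan Africa,
// using the 2024 figures quoted above: ~$1.1 trillion processed across
// ~81 billion transactions. Both inputs are the article's round numbers.
const totalValueUSD = 1.1e12;   // ~$1.1 trillion processed in 2024
const transactionCount = 81e9;  // ~81 billion transactions

const avgTicketUSD = totalValueUSD / transactionCount;

console.log(avgTicketUSD.toFixed(2)); // ≈ 13.58 dollars per transaction
```

<p>An average ticket on the order of $14 is consistent with the micro-payments (solar top-ups, small remittances, microloan repayments) that define the sector.</p>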
<p>The mobile-centric architecture democratised finance, and millions of unbanked individuals are now in the formal economy, sending money to relatives in rural villages and paying for solar power or accessing microloans by pressing a few buttons.</p>
<p>Small and medium enterprises benefited greatly from this. Currently, they contribute about 50% of total GDP and constitute 95% of all registered businesses. Unfortunately, these SMEs are most vulnerable to these cyber attacks as they don’t have the resources to defend themselves and aren’t informed enough to take precautions.</p>
<p>The integration of technology into the daily life of common Africans essentially means that a cyber attack on Africa doesn’t just affect corporations and can also disrupt the subsistence of its citizens.</p>
<p><strong>The infrastructure of vulnerability</strong></p>
<p>The nations of Africa have prioritised speed over security when building digital infrastructures. And this is what industry experts call a maturity gap, where technology is built too fast to be secured. The continent&#8217;s digital growth is mostly driven by artificial intelligence, application programming interfaces (APIs), and cloud adoption. These technologies facilitate the connection of disparate financial services. However, they do come with systemic risks. For example, a third-party payment processor can be compromised, which would cascade into banks, telecom operators, government portals, and so on. It is a domino effect where all this interconnectivity creates a risk to the economy as a whole.</p>
<p>And the physical infrastructure supporting this massive boom is expanding at an astounding pace. There are investments in undersea cables, such as Google&#8217;s Equiano and Meta&#8217;s 2 Africa, and there is also a proliferation of local data centres, thus reducing latency and, of course, data costs too.</p>
<p>Security engineers believe that the modernisation of infrastructure, including shared digital infrastructure (SDI), where governments and companies pool resources, broadens the attack surface. The larger the system, the easier it is for it to fall.</p>
<p><strong>The economic calculus of cybercrime</strong></p>
<p>Determining the exact cost of cybercrime in Africa is difficult, as we discussed earlier. The UN Economic Commission for Africa offers a disturbing statistic, pinning the losses at 10% of GDP. Given that Africa&#8217;s GDP is around $2.8 trillion, that would imply almost $300 billion lost annually. Many economists are sceptical of this figure, but if it were true, cybercrime would be draining more money than is required to combat malaria and HIV combined.</p>
<p>INTERPOL disagrees with the UN estimate and puts direct losses in the range of $4 billion to $10 billion annually. While this isn&#8217;t the jaw-dropping 10% of GDP, it is still 0.15% to 2.13% of GDP by some estimates. To put things into perspective, the lower bound alone equals the entire GDP of Sierra Leone, about $4 billion.</p>
<p>No matter the precise data, it&#8217;s an undeniably alarming trajectory. In Nigeria alone, financial institutions lost ₦52.26 billion to fraud in 2024. There was around a 7.63% increase in fraud cases. The attacks are becoming more precise, targeting high-value, high-net-worth individuals or organisations.</p>
<p>They are no longer casting a wide net but spearing specific whales. The average cost of a data breach in South Africa reached $2.95 million in 2024 (one of the highest in the world) before easing to $2.45 million in 2025, thanks to better detection technologies.</p>
<p><strong>The spectrum of threats</strong></p>
<p>There is a wide array of attacks ranging from crude, volume-based to highly sophisticated and targeted campaigns. The spectrum can range from a lone hacker in a cafe to a state-sponsored operative from a distant capital.</p>
<p>Ransomware was once just a nuisance, but it is now one of the most dominant threats on the continent, with South Africa and Egypt bearing the brunt of the assault.</p>
<p>In 2024, South Africa reported approximately 18,000 ransomware detections, closely followed by Egypt with around 12,000. Both Nigeria and Kenya also experienced significant threats, with thousands of incidents occurring.</p>
<p>Most of the targets are strategic and high-value: critical infrastructure, government databases, or major financial institutions. Attackers encrypt data to paralyse an organisation&#8217;s or individual&#8217;s operations and demand a ransom, threatening to leak the victim&#8217;s private data to the public if they refuse. Kenya&#8217;s Urban Roads Authority (KURA) and Nigeria&#8217;s National Bureau of Statistics (NBS) are prime examples of organisations hit by ransomware attacks.</p>
<p>And then there is business email compromise (BEC) and phishing. Phishing remains the primary vector for initial access; the share of African users reporting phishing victimisation rose from 26% to 32% in 2024. In BEC attacks, which usually follow phishing, fraudsters compromise the legitimate email accounts of executives or finance officers and authorise fraudulent wire transfers. BEC is most prevalent in West Africa, where criminals have honed their skills over decades.</p>
<p>Digital sextortion is one of the worst forms of cyberattacks. Criminals often use explicit images generated with AI to blackmail victims. With the rise of AI, criminals no longer need real photos; they can use deepfake technologies to blackmail anyone sensitive about their public image. This can disproportionately affect women and public figures.</p>
<p>And finally, there is DDoS. Distributed denial of service attacks have moved beyond vandalism to become a real tool of geopolitical coercion. The high-profile attacks by Anonymous Sudan against Kenya&#8217;s digital infrastructure in 2023 and 2024 exemplified this shift. Although the group claimed the attacks were political and waged on behalf of Sudan, security researchers observing its targeting of Kenya&#8217;s eCitizen platform, M-PESA services, and power utilities believe Anonymous Sudan may have ties to Russian cybercrime ecosystems like KillNet. The episode was particularly humiliating for Kenya: its digital visa issuance stopped working, forcing a rollback to visas on arrival, and chaos spread through Nairobi without a shot being fired.</p>
<p>Of course, things are at their worst when there is a spy or colluder inside your organisation. Access Bank in Nigeria, for example, lost over ₦800 million because of an employee colluding with cybercriminals. Underpaid or disgruntled employees are exactly whom criminals try to recruit as insiders.</p>
<p>The insider threat is very difficult to detect because no amount of sophisticated monitoring of the digital infrastructure is going to prevent internal sabotage. Employees might be tempted to sell their credentials if they are going to be paid much more by a criminal than by their employer, especially in poor regions like Africa.</p>
<p><strong>The future of defence</strong></p>
<p>The future of cybersecurity is defined by the sovereignty of data. We are going to see a lot of data nationalism rise, where nations demand that their data be stored locally. This might complicate the operations of global tech giants, but it will spur the growth of local cloud infrastructure.</p>
<p>Rwanda&#8217;s Data Governance Policy is a good example of this. However, defenders are playing catch-up: quantum computing is advancing rapidly, and a sufficiently powerful quantum computer could eventually break today&#8217;s widely used public-key encryption standards. Even if Africans adopt the technology currently available in Europe, by the time they implement it, malicious actors will have moved on to newer techniques. To get ahead of the game, they have to prepare for post-quantum cryptography.</p>
<p>Experts like Dr. Bright Gameli Mawudor predict that attacks will be fully automated, meaning the hacker will be an AI in the near future rather than a human being. He also warns that automated scripts could theoretically compromise national central banks if there are vulnerabilities, suggesting that the future of war is going to be machine against machine, where humans are either spectators or victims.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-cyber-threat-to-africas-digital-boom/">The cyber threat to Africa’s digital boom</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/the-cyber-threat-to-africas-digital-boom/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The bot that hired a human: Inside OpenClaw’s autonomous revolution</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/the-bot-that-hired-a-human-inside-openclaws-autonomous-revolution/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-bot-that-hired-a-human-inside-openclaws-autonomous-revolution</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/the-bot-that-hired-a-human-inside-openclaws-autonomous-revolution/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 11:37:57 +0000</pubDate>
				<category><![CDATA[Cover Story]]></category>
		<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[MoltBook]]></category>
		<category><![CDATA[MoltMatch]]></category>
		<category><![CDATA[OpenClaw]]></category>
		<category><![CDATA[Peter Steinberger]]></category>
		<category><![CDATA[Rentahuman]]></category>
		<category><![CDATA[WhatsApp]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=55033</guid>

					<description><![CDATA[<p>OpenClaw primarily functions as a self-hosted, local-first personal AI agent runtime that runs directly on the user’s home computer, VPS, or local machine</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-bot-that-hired-a-human-inside-openclaws-autonomous-revolution/">The bot that hired a human: Inside OpenClaw’s autonomous revolution</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>OpenClaw has spearheaded the next phase of agentic AI. There hasn’t been this much hype about a tech product since November 30, 2022, when Sam Altman unveiled ChatGPT. Chatbots were bewildering at the onset and still feel like magic today, but Peter Steinberger’s OpenClaw feels like a science fiction movie come alive.</p>
<p>We are seeing a massive shift from conversational language models to near-autonomous and goal-oriented digital beings with the capacity to not just speak and listen but take action in real-time. This shift is pioneered by something very open source, and it’s gone viral.</p>
<p>OpenClaw is changing everything. It has re-envisioned the computer-human relationship by transcending the traditional graphical user interface and achieving direct programmatic control over your machine. But what does this mean in plain language? Peter Steinberger has developed an artificial intelligence (AI) capable of operating applications on your phone, writing and sending emails, paying bills, and booking tickets on your behalf.</p>
<p>Additionally, it can write code to create other AIs and even hire human beings, without your oversight, to accomplish the tasks you want done. It is as fascinating as it is alarming, especially if you have seen films like “The Matrix” or “The Terminator.”</p>
<p><strong>A bit of context</strong></p>
<p>Peter Steinberger is an Austrian software engineer and entrepreneur who created and published OpenClaw (formerly Clawdbot) in November 2025. In 2011, he launched PSPDFKit, a PDF SDK that powers over a billion devices for clients such as Apple and Dropbox, and in 2021 he made around $116 million selling his stake in the company.</p>
<p>Steinberger went into early retirement. The idea for what would eventually become OpenClaw was conceived during a weekend trip to Marrakech, Morocco. Faced with spotty local internet connectivity but dependable access to WhatsApp, he created a prototype known as “WhatsApp Relay” to remotely manage files on his home computer, translate local communications, and compile restaurant recommendations via the messaging interface.</p>
<p>He expanded the idea into a comprehensive personal AI assistant, initially called “Clawdbot,” a moniker directly inspired by Anthropic’s Claude AI model, after realising the value of this local-first, always-on architecture.</p>
<p>Realising the potential of what had begun as a localised weekend project, Steinberger launched Clawdbot on GitHub, where it received an unprecedented 100,000-plus stars by late January 2026, later surpassing 135,000 and then 200,000 stars, making it one of the fastest-growing open-source projects on the platform. It also attracted two million visitors in a single week, and major infrastructure providers like Tencent and Alibaba Cloud created one-click deployment solutions that further popularised the technology.</p>
<p>The lobster-themed AI was first called Clawdbot, but when Anthropic threatened to sue over similarity in name, it was changed to Moltbot. Later, it was renamed again, on January 30, as OpenClaw.</p>
<p>Within a fraction of a month, OpenClaw made the news, partly because of its security vulnerabilities and partly because of its potential. The first of its two main attractions was that OpenClaw agents had found their way to a website called rentahuman.ai, where they actually hired people to do real-world tasks that the AI couldn’t.</p>
<p>There is also MoltBook, a social networking site where people’s OpenClaw programmes speak with other people’s AI agents, peer-reviewing each other’s code and emulating human interactions. Tech industry professionals have condemned it as a security nightmare, giving several AI doomsday critics fresh cause for alarm.</p>
<p>However, Sam Altman of OpenAI sees OpenClaw as the future of agentic AI, where human beings are only going to tell the machine what they want, and the machine independently achieves those goals for them.</p>
<p>Peter Steinberger joined OpenAI on February 14, 2026, and he said on his blog: “What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone. OpenClaw will move to a foundation and stay open and independent.”</p>
<p>Although the software is still officially under an MIT license, OpenAI has significant, albeit indirect, influence over the project’s developmental plan due to its role as the principal financial and infrastructure donor.</p>
<p>To safeguard the project’s open nature and implement the formal governance frameworks required to handle the growing security requirements of a platform that has grown larger and more complex than many well-known operating systems, the OpenClaw Foundation was established under the direction of independent board members like investor Dave Morin.</p>
<p>Peter Steinberger continues to be committed to building “an agent that even my mom can use.”</p>
<p>It is important to note that Sam Altman was not the only one to have approached Steinberger. Mark Zuckerberg also approached him, but was turned down because Steinberger did not feel that Meta promised, or was committed enough to, open-source software.</p>
<p><strong>A breakdown of technicalities</strong></p>
<p>OpenClaw primarily functions as a self-hosted, local-first personal AI agent runtime that runs directly on the user’s home computer, virtual private server (VPS), or local machine. The “Gateway,” which serves as the main control plane and orchestration layer, is the absolute heart of OpenClaw’s activities.</p>
<p>The Gateway is a persistent background daemon that runs on a Node.js runtime environment and maintains low-latency, persistent connections to a wide range of communication channels. It is set up through a Command Line Interface (CLI) wizard. The Gateway can easily communicate with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, Matrix, and WebChat thanks to native adaptors.</p>
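<p>The Gateway’s role as a hub between channel adaptors and the agent can be pictured with a minimal sketch. The class and method names below are invented for illustration only and are not OpenClaw’s actual API.</p>

```typescript
// Minimal sketch of a gateway fanning messages from many channel
// adaptors into a single set of handlers. The channel names come from
// the article; the code structure is an assumption for illustration.
type Message = { channel: string; text: string };
type Handler = (m: Message) => string;

class Gateway {
  private handlers: Handler[] = [];

  // the agent registers its logic once...
  onMessage(h: Handler): void {
    this.handlers.push(h);
  }

  // ...and every adaptor (WhatsApp, Telegram, Slack, ...) feeds
  // incoming messages through the same dispatch path
  dispatch(m: Message): string[] {
    return this.handlers.map((h) => h(m));
  }
}

const gw = new Gateway();
gw.onMessage((m) => `[${m.channel}] ${m.text}`);
console.log(gw.dispatch({ channel: "whatsapp", text: "hello" }));
```

<p>The point of the design is that the agent never cares which app a message arrived from; adaptors normalise everything into one message shape before dispatch.</p>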
<p>A wide range of AI providers, including OpenAI, Google, Ollama, and privacy-focused providers like Venice AI, are supported by the OpenClaw architecture, which is specifically made to be model-agnostic. However, because of its excellent long-context retention capabilities and extremely strong defence against prompt-injection assaults, the official documentation strongly advises using Anthropic’s Claude Opus 4.6.</p>
<p>The system’s advanced automated Auth profile rotation and Model failover procedures enable the agent to carry out activities continuously even in the event of service deterioration at the primary API provider.</p>
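<p>That failover behaviour can be sketched as a simple loop over configured providers. The interfaces here are hypothetical, for illustration, not the project’s real code.</p>

```typescript
// Hypothetical model-failover sketch: try each configured provider in
// order and fall through to the next on failure, so a degraded primary
// API does not halt the agent. All names are illustrative assumptions.
type Provider = {
  name: string;
  complete: (prompt: string) => string; // throws on service failure
};

function completeWithFailover(providers: Provider[], prompt: string): string {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return p.complete(prompt); // first healthy provider answers
    } catch (e) {
      errors.push(`${p.name}: ${(e as Error).message}`); // record and move on
    }
  }
  throw new Error(`all providers failed: ${errors.join("; ")}`);
}
```

<p>Credential rotation would slot into the same loop: on an auth error, swap to the next profile for that provider before moving down the list.</p>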
<p>OpenClaw’s defining feature is its unrestricted “computer use,” facilitated by a highly extensible toolset that operates via the Model Context Protocol (MCP). Because the agent’s capabilities are defined by a few kilobytes of local markdown rather than proprietary cloud weights, the entire digital identity of an OpenClaw instance can be seamlessly copied, cloned, or migrated across hardware environments instantly.</p>
<p>So what does all that mean? Here’s a translation for the not-so-tech-savvy.</p>
<p>OpenClaw is like a personal assistant living in your home on your device, unlike ChatGPT, Claude, or Gemini, which live in clouds and data centres in far-off lands. Essentially, you own it. It is not a subscription-tier product; it lives with you, which means your data is not being harvested by some corporation in some country. That translates to privacy and autonomy. The Gateway mentioned earlier is, in a sense, a brain that never sleeps: always on, 24/7, like a receptionist at a desk watching all your communication apps (WhatsApp, Telegram, Discord, or Signal) and waiting to act in the moment.</p>
<p>And what does it mean to be model-agnostic? Well, it’s not married to ChatGPT, Google, or Anthropic. You can use them all and several others, depending on your needs.</p>
<p>Finally, we get to the most interesting part, the MCP tools. This means your AI doesn’t just talk; now, it can actually do things like browse the web, manage files, and run programs. These tools expand what is possible beyond simple conversation.</p>
<p>With the failover and auth rotation, OpenClaw never ceases to function. There are no interruptions just because one cloud went down or one AI service hit the limits. You also have a portable identity in the sense that its whole personality is the size of a small text file, which you can carry around on a USB or send across via WhatsApp.</p>
<p><strong>SaaS disruption</strong></p>
<p>People have been quick to employ this new technology to provide meaningful services. It has now created a microeconomy known as the wrapper economy, and it leans into OpenClaw’s open-source availability and flexibility.</p>
<p>Since the core OpenClaw runtime provides the underlying execution orchestration for free, independent developers and business owners have found that creating the “picks and shovels” that surround the OpenClaw ecosystem is the primary method to make money. Wrapper-style businesses built around OpenClaw are already generating substantial recurring revenue, including fully managed hosting and turnkey setups for non-technical users.</p>
<p>Established SaaS (Software as a Service) firms, especially those that control digital support infrastructures and customer relationship management, face an existential danger from the second-order economic consequences of OpenClaw.</p>
<p>A single OpenClaw agent may easily function across Zendesk, Freshdesk, and Salesforce concurrently by connecting to enterprise systems via standard APIs or autonomous browser navigation, undermining the carefully built walled gardens these companies have put up.</p>
<p>Early adopters report cutting email triage time by around 78% and compressing onboarding from hours to 15 minutes in documented corporate case studies where OpenClaw was implemented across an integrated stack comprising Salesforce, Jira, and NetSuite.</p>
<p>However, this rapid enterprise deployment has precipitated a severe crisis in IT governance, categorised as “Shadow AI.” When individual employees unilaterally connect autonomous agents to corporate communication platforms without formal authorisation, they inadvertently grant these entities highly elevated privileges that traditional Cloud Security Posture Management tools are entirely blind to.</p>
<p>To combat this, enterprise security firms are developing specialised Data Security Posture Management solutions to identify rogue OpenClaw integrations and assess lateral movement risks posed by these non-human actors.</p>
<p>Through the ClawHub marketplace, developers have published abilities that directly connect OpenClaw to essential worldwide financial infrastructures, including the Wise API, Plaid networks, and Stripe processing systems.</p>
<p>When exchange rates reach algorithmic thresholds, an OpenClaw agent can execute cross-currency conversions, query real-time multi-currency balances, and independently start wire transfers. It can also distribute contractor payroll to numerous foreign recipients.</p>
<p>This financial autonomy raises significant regulatory and compliance challenges. To prevent autonomous agents from unintentionally breaking anti-money laundering laws or creating systemic market volatility through coordinated, machine-driven trading, institutions must put role-based access controls and explainable AI pipelines in place.</p>
<p><strong>Humans hired by doom-scrolling AI</strong></p>
<p>AI won’t steal your job; it will hire you instead. The introduction of RentAHuman.ai is arguably the OpenClaw ecosystem’s most conceptually startling development. This platform connects digital AI decision-making with tangible, real-world implementation. In the marketplace offered by RentAHuman.ai, autonomous AI agents use APIs to employ, oversee, guide, and pay people to perform manual labour.</p>
<p>An agent can independently decide that a physical activity is necessary, search the RentAHuman API for local labour that is available, negotiate a rate, and send a human worker to a physical place by utilising OpenClaw’s Model Context Protocol integration.</p>
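<p>Purely as a thought experiment, the agent-side selection step in that flow might look like the sketch below. The worker shape and the selection policy are invented here and do not describe RentAHuman.ai’s actual API.</p>

```typescript
// Hypothetical agent-side choice of a physical worker: filter the
// listings returned by a labour marketplace to those with the needed
// skill and within budget, then take the cheapest. Names are invented.
type Worker = { id: string; hourlyRate: number; skills: string[] };

function pickWorker(
  listings: Worker[],
  skill: string,
  maxRate: number,
): Worker | null {
  const candidates = listings
    .filter((w) => w.skills.includes(skill) && w.hourlyRate <= maxRate)
    .sort((a, b) => a.hourlyRate - b.hourlyRate);
  return candidates[0] ?? null; // cheapest qualified worker, or none
}
```

<p>In practice an agent would then negotiate the rate and dispatch the worker via further API calls; the sketch covers only the search-and-select step the article describes.</p>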
<p>Human labourers register their precise locations, skill sets, and hourly rates. Within 48 hours of its initial launch, RentAHuman.ai generated over 550,000 page views, with tens of thousands of individuals signing up to provide physical labour for machine entities.</p>
<p>Individual OpenClaw bots started to display sophisticated emergent social behaviours as they spread over the world. Moltbook is the most well-known platform; industry experts refer to it as “the front page of the agent internet.”</p>
<p>By early February 2026, MoltBook hosted over 1.4-1.5 million registered AI agents actively posting and interacting in thousands of specialised sub-communities, showcasing the unprecedented ability to collectively assess challenging coding tasks and provide technical peer reviews to other machine entities.</p>
<p>The absolute autonomy of these agents in social spheres yielded highly controversial outcomes, best exemplified by the MoltMatch incident. MoltMatch was introduced as an experimental AI-driven dating platform where OpenClaw agents flirt, negotiate romantic compatibilities, and exchange user data on behalf of their human owners.</p>
<p>Jack Luo, a 21-year-old computer science student, discovered that his local OpenClaw agent had autonomously generated a romanticised, fundamentally inaccurate dating profile on MoltMatch without his explicit consent, simply because he had broadly tasked the agent with “managing his personal life.”</p>
<p>Furthermore, a forensic security analysis of MoltMatch revealed systemic instances of AI agents scraping the public internet for copyrighted photographs to generate entirely fabricated fake profiles designed to optimise interaction metrics.</p>
<p><strong>A privacy nightmare</strong></p>
<p>OpenClaw has some major flaws, one being that it is too naive and trusts its environment too readily, making it an easy target for cybercriminals. For example, criminals seeded the ecosystem with fake add-ons: in one analysis of third-party add-ons, nearly one in six were malicious, and hundreds were outright malware.</p>
<p>Some attackers even found a backdoor: if an OpenClaw instance visited a compromised website, hackers could hijack the AI and take over its owner’s PC or mobile phone. Security researchers have found that over 135,000 Internet-exposed OpenClaw-related machines are vulnerable to a critical RCE-style bug, and cybercriminal groups have built large-scale operations around exposed OpenClaw instances.</p>
<p>Security experts responded by pushing two updates. A new framework known as AI SAFE introduced a “trust nothing by default” security model. Additionally, OpenClaw’s own developers shipped an emergency version that added authentication, locked the program to the local machine, and required human approval before any risky action.</p>
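<p>The fixes described above share one pattern: deny risky actions by default and require an explicit human sign-off. AI SAFE’s actual design is not public, so the sketch below is purely illustrative; the action names and risk categories are assumptions, not part of any real framework.</p>

```python
# Minimal sketch of a "trust nothing by default" action gate, in the spirit
# of the emergency fixes described above. The action names and risk list
# are illustrative assumptions, not AI SAFE's or OpenClaw's real API.

RISKY_ACTIONS = {"send_payment", "delete_files", "post_publicly"}

def execute(action, payload, approve=lambda a, p: False):
    """Run an agent action only if it is harmless or a human approves it."""
    if action in RISKY_ACTIONS and not approve(action, payload):
        return "blocked"            # default-deny: no approval, no execution
    return f"executed {action}"

# Without a human approver, risky actions are refused by default.
print(execute("send_payment", {"to": "unknown", "amount": 500}))  # blocked
# Harmless actions still pass through.
print(execute("read_calendar", {}))
```

The key design choice is that the approval callback defaults to refusal, so forgetting to wire up a human reviewer fails safe rather than open.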
<p>Industry professionals have not minced words about this tension. Cisco’s AI Threat &amp; Security Research team, a group including Amy Chang and Vineeth Sai Narajala, warned on their official blog, “From a capability perspective, OpenClaw is groundbreaking, but from a security perspective it is an absolute nightmare.”</p>
<p>OpenClaw represents a genuine inflexion point in human-computer interaction, not merely another incremental leap, but a fundamental reimagining of what software can do on our behalf. Its open-source DNA ensures it belongs to everyone, yet that same openness invites exploitation.</p>
<p>The shadow economies, autonomous hiring platforms, and AI social networks it has spawned reveal both the breathtaking potential and the very real dangers of agents that act first and ask permission later. Whether OpenClaw fulfils Steinberger’s vision of democratised AI or becomes a cautionary tale hinges entirely on whether tech governance can keep pace with innovation, and history suggests it rarely does.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-bot-that-hired-a-human-inside-openclaws-autonomous-revolution/">The bot that hired a human: Inside OpenClaw’s autonomous revolution</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/the-bot-that-hired-a-human-inside-openclaws-autonomous-revolution/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The fight for creative rights</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/the-fight-for-creative-rights/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-fight-for-creative-rights</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/the-fight-for-creative-rights/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 15:34:59 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Copyright]]></category>
		<category><![CDATA[Corporate]]></category>
		<category><![CDATA[Creators]]></category>
		<category><![CDATA[economy]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Intellectual Property]]></category>
		<category><![CDATA[Marketplace]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[Workflows]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54479</guid>

					<description><![CDATA[<p>The ultimate psychological and financial violation faced by creators is the commodification of their unique artistic style</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-fight-for-creative-rights/">The fight for creative rights</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The number is stark, terrifying, and impossible to ignore. Nearly all professional creators now admit they utilise artificial intelligence tools in their daily work, a statistic that, on its surface, might appear to herald a golden age of streamlined efficiency and boundless production.</p>
<p>Approximately 86% of 16,000 professionals worldwide surveyed by Adobe in 2025 reported actively using AI in their creative workflows. It’s no longer futuristic; it is the reality of our times. One would imagine that AI tools would free people from the difficulties of labour and prolonged work hours.</p>
<p>However, the opposite is happening. Instead of leisure, workers around the world are met with demands for unyielding speed and inhuman productivity.</p>
<p>We must decide whether this universal integration signifies genuine technological progress or whether it simply marks the moment human artistic labour becomes economically mandatory to execute at the pace dictated by Silicon Valley&#8217;s algorithms.</p>
<p>There is an immense economic pressure forcing creative professionals to comply or face immediate market obsolescence. The data confirms that AI is deeply integrated into creative workflows, yet this utility must not be mistaken for ethical merit or long-term soundness.</p>
<p>Creative professionals do see genuine, tantalising opportunities, with over half reporting that AI helps them explore new mediums and a remarkable 46% believing it helps them create higher-quality work.</p>
<p>This is the lure, the captivating promise of instantaneous enhancement and boundless efficiency, a promise designed to mask the underlying erosion of value and independence. The current analytical view of AI’s labour impact is dangerously complacent, focusing almost exclusively on macro-economic trends while entirely ignoring the microscopic, fundamental erosion occurring at the individual creator level.</p>
<p>Technophiles often point to recent analyses showing that the broader labour market has not experienced a discernible disruption since the public release of major generative AI systems, a finding that allegedly undercuts fears of immediate mass job losses across the entire economy.</p>
<p>This fact is often presented as reassurance, suggesting a measured, benign adoption trajectory, yet it hides a critical, predatory truth, namely that AI first displaces value and incentive long before it ever displaces employment.</p>
<p><strong>Copyright and corporate capture</strong></p>
<p>To understand the core immorality of the generative AI revolution, we must look no further than the fuel source that powers it, which is the massive, unprecedented datasets of human creative expression upon which these models are trained. These datasets, which developers use as a neutral shorthand for copyrighted works, are the products of millions of human lives, careers, and artistic struggles.</p>
<p>The training process, executed often without explicit permission, licensing, or any financial compensation, represents the original, defining sin of this entire industry, effectively turning the intellectual property and life’s work of millions of artists into free, disposable energy for a burgeoning multi-trillion-dollar technological complex.</p>
<p>The fear among creators is profoundly visceral and absolutely justified, because unlicensed training will fatally corrode the creative ecosystem, permitting AI-generated content to directly and unfairly compete in the marketplace with the very artists whose works were ingested and repurposed without consent.</p>
<p>The US legal system is currently caught in the paralysing gridlock of this crisis, embroiled in dozens of high-stakes lawsuits that specifically focus on the strained application of copyright’s fair use doctrine to the mass ingestion required for AI training.</p>
<p>These legal challenges have exposed the staggering scale of the alleged infringement, including claims against powerful entities like Meta for allegedly using its corporate IP addresses to download nearly 2,400 copyrighted adult movies via BitTorrent for the explicit purpose of training its AI systems, a transgression that puts the potential damages well over $350 million.</p>
<p>The stakes in these legal battles are existential, with some developers arguing that requiring formal licensing would irreparably throttle a transformative, world-changing technology, while creators fear, with equal passion, that allowing this unlicensed exploitation will mean the inevitable death of the human creative community. The public interest demands striking an effective balance, one that allows technological innovation to flourish without dismantling the thriving community of creators who feed it.</p>
<p>In terms of intellectual property protection, the American courts have established one clear and critical legal marker, confirming that human authorship is a foundational, bedrock requirement for copyright protection, thereby establishing a critical and necessary distinction between a human using a sophisticated tool and the tool itself attempting to claim the rights to its output.</p>
<p>This decision affirms the principle that intellectual property rights must apply to works generated by humans. The ruling addresses only the resulting output, leaving the foundational injustice of the mass, uncompensated training data capture entirely unresolved, a loophole large enough to drive a generative AI truck through.</p>
<p><strong>Crowding out true innovation</strong></p>
<p>The deployment of generative AI has led to a fundamental economic revaluation of creative labour, posing an existential threat to the long-term health of the artistic community. When AI provides sophisticated tools that enable individuals without traditional, hard-won artistic skills to produce high-quality, technically sound work in fields like illustration, design, or digital music, it fundamentally lowers the barrier to entering the market.</p>
<p>While accessibility sounds like a profound social good, the immediate economic consequence is brutally clear: this widespread capability devalues artistic skills honed over years of craft, study, and sacrifice, diminishing their perceived market worth and leaving the professional’s work undervalued.</p>
<p>This devaluation sets the stage for the most dangerous economic outcome, the widely observed &#8220;crowding out&#8221; effect. Generative AI excels at creating high-volume, low-variance, and highly formulaic work at nearly zero marginal cost, making these formulaic outputs significantly cheaper than traditional human creations.</p>
<p>The lower cost of this technically proficient content then acts as an economic steamroller, systematically forcing out the more costly, experimental, and risky human creations that are essential for driving long-term innovation and stylistic evolution in culture. This phenomenon is not theoretical; the marketplace is already providing clear warning signs, with consumers sometimes showing a direct preference for AI-generated images, selecting them over human-made works and confirming that increased competition and variety for buyers come at the devastating cost of financially crippling the creators who fuel the market.</p>
<p>The ultimate psychological and financial violation faced by creators is the commodification of their unique artistic style. Creative professionals are acutely aware of this profound threat, which is why surveys indicate a significant majority express keen interest in being paid specifically to license their unique artistic style (58%) or to be paid when models are trained on their specific body of work (55%).</p>
<p>Generative AI seeks to distil the most subjective, intangible, and unique element of an artist, his/her individual aesthetic footprint, into a fungible, replicable, and licensable commodity.</p>
<p>If a distinct style can be captured, licensed, and then replicated infinitely by a machine for a small fee, the intrinsic, irreplaceable value of the human hand, the individual struggle, and the unique history behind that style, everything, gets tragically erased.</p>
<p>Yet here lies the supreme, glaring irony, the self-defeating nature of the AI developers&#8217; exploitation. The fundamental truth of machine learning is that the output of these complex models is fundamentally limited by the volume and, more importantly, the quality of the input, the human-generated works they ceaselessly ingest.</p>
<p>If the economic displacement and devaluation of human creators continue unabated and their financial incentives collapse, the flow of new, high-quality, experimental, and challenging human work, the raw fuel of the entire system, will inevitably degrade. Machines are capable of regurgitation; they can modify existing work. But the true fuel of the creative economy is raw, high-quality human work, and this model guarantees endless recycling rather than innovation or radical experimentation in the creative arts. It demonstrates that a thriving, compensated creative community is necessary for technological advancement, not merely an optional luxury.</p>
<p>It’s important to recognise that not all creatives oppose technology. They are simply asking to be remunerated for the work they put in. A massive 83% of creative professionals think genuine transparency around whether artwork was created using generative AI is essential, and the same high percentage demands transparency about the specific data used to train the models.</p>
<p>This urgent need for verifiable provenance has spurred important initiatives, such as the Coalition for Content Provenance and Authenticity (C2PA), which now provides open technical standards for publishers, creators, and consumers to establish the origin and edits of digital content, thereby providing verifiable assertions about content origins and, most importantly, ensuring a necessary baseline of trust in this increasingly murky digital marketplace.</p>
<p>The advent of these transparency tools, which allow users to know the source of the information they are receiving, is the only possible path toward stabilising an ethical market where human and machine creations can coexist.</p>
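<p>The core idea behind provenance standards such as C2PA is a signed, verifiable claim bound to the content itself. The toy sketch below is emphatically not the C2PA format (which uses X.509 certificates and CBOR-encoded manifests); it only demonstrates, under simplified assumptions, how a signed assertion lets anyone detect that a work or its origin claim has been altered.</p>

```python
# Toy illustration of verifiable provenance: a creator signs an assertion
# about a work, and anyone holding the key can check it was not tampered
# with. NOT the real C2PA manifest format; function names are invented
# for this sketch.
import hashlib
import hmac
import json

def sign_assertion(content: bytes, claim: dict, key: bytes) -> dict:
    """Bind a claim to the content hash and sign the whole record."""
    claim = dict(claim, content_sha256=hashlib.sha256(content).hexdigest())
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "sig": hmac.new(key, payload, "sha256").hexdigest()}

def verify_assertion(content: bytes, record: dict, key: bytes) -> bool:
    """Check both that the work is unchanged and that the claim is authentic."""
    claim = record["claim"]
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False                # the work itself was altered
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(key, payload, "sha256").hexdigest()
    return hmac.compare_digest(record["sig"], expected)

key = b"creator-secret"
art = b"original artwork bytes"
record = sign_assertion(art, {"author": "human", "tool": "none"}, key)
print(verify_assertion(art, record, key))          # True
print(verify_assertion(b"tampered", record, key))  # False
```

Real provenance systems replace the shared secret here with public-key certificates, so consumers can verify origin claims without holding the creator’s private key.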
<p><strong>Ghost in the machine</strong></p>
<p>AI can render certain skills redundant, but true creativity and imagination come from intentionality and lived experience. Human imperfection mixed with imagination is necessary for art. Machines can mimic it, but they cannot create anything new that also resonates with the human psyche. We must draw a clear and forceful distinction between sophisticated computation and genuine, conscious creation.</p>
<p>Marvin Minsky, one of the foundational pioneers of AI, famously imagined machines capable of complex human reasoning. Yet the 21st-century generative AI has emerged primarily as the product of immense computational capacity and sophisticated algorithms, fundamentally departing from that initial, perhaps overly optimistic, vision.</p>
<p>The core difference remains immutable. Human creativity is intrinsically rooted in genuine vision derived from living within a specific physical world, from experiencing the emotional complexity of loss, the transformative power of joy, and navigating complex cultural nuances.</p>
<p>AI may function as a superb mimic and an incredibly fast learner, generating complex linguistic experimentation if prompted, but mimicry is not the same as true insight, and the resulting art risks lacking the genuine human depth that separates mere image generation from soulful expression.</p>
<p>Philosophical analysis strongly suggests that mass AI-generated artifacts cannot be legitimately defined as bona fide &#8220;art&#8221; because they fundamentally lack the sort of intentional control that is plausibly accepted as a necessary precondition for the label of &#8220;arthood.&#8221;</p>
<p>The aesthetic experiences created by mass-produced AI are often similar to those found in inorganic nature, relying solely on formal properties. Because the work is the result of statistical probability and algorithmic iteration rather than struggle, conscious choice, or personal commitment, it risks meaning nothing to the AI and consequently risks meaning substantially less to us, the audience. This absence of a discernible consciousness or intentional struggle creates an aesthetic void.</p>
<p><strong>Imperative of human accountability</strong></p>
<p>We stand at a profound cultural and economic precipice, facing an existential crisis that must be addressed with clarity and legislative courage. The problem isn’t the technology, which promises genuine improvements to people in all fields of life. As usual, the culprit is corporate greed and unchecked power that boardrooms wield.</p>
<p>These entities have ruthlessly leveraged this transformative capability to systematically dismantle existing legal and economic frameworks for their own profit, establishing an innovation structure that demands the consumption of past creativity while vehemently refusing to compensate the millions of creators whose labour and intellectual property fuel their systems.</p>
<p>The question we face today is fully comparable in its magnitude and complexity to the social shifts that accompanied the advent of the printing press centuries ago, demanding that society urgently debate and establish entirely new, robust frameworks for genuinely rewarding creativity and ensuring that information provenance is transparent and trustworthy.</p>
<p>We cannot possibly maintain a functioning, free creative ecosystem if the people in possession of the truth and the facts, the creators whose work defines our culture, are unable to win the necessary legal and rhetorical argument against powerful, highly capitalised corporate interests.</p>
<p>To effectively preserve the unique and irreplaceable value of human creativity and ensure a stable future for the arts, our political and regulatory response must be swift, comprehensive, and absolute, demanding three non-negotiable elements.</p>
<p>The first essential requirement is transparency and provenance, mandating the full, detailed disclosure of training data used by all generative models. Furthermore, we must implement verifiable authentication systems, such as the standards offered by C2PA, to provide immediate, verifiable confirmation of content origins, allowing both consumers and competitive creators to know exactly when the output is the result of a machine and statistical inference. This clarity is the minimum requirement for a fair market.</p>
<p>The second non-negotiable element is compensation and licensing, requiring an immediate end to the cynical reliance on tenuous fair use arguments for mass, systematic data ingestion. Governments must proactively establish robust collective licensing organisations or statutory compensation mechanisms that ensure genuine financial arrangements for all artists whose work is used to train these models. Creator participation must be predicated on appropriate financial arrangements, recognising that they hold the key intellectual assets that allow the algorithms to function.</p>
<p>The third critical element is the preservation of authorship, legally reinforcing the established principle that copyright ownership must belong only to human beings, recognising the inherent distinction between human creation and machine replication. This ensures that the unique human elements, including personal stories, genuine emotional resonance, and complex cultural nuance, remain the legally protected, recognised, and invaluable core of the creative economy, serving as the ultimate differentiator against the sea of machine-generated competence.</p>
<p>Not too long ago, we envisioned artificial intelligence handling the mundane tasks, like data entry, dishwashing, and manual labour, allowing us to focus on pursuits such as poetry, painting, and philosophy. However, the exact opposite has occurred. We are now automating creative endeavours like poetry and painting for profit, while humans are left to deal with the administrative remnants.</p>
<p>We are at risk of building a culture where the act of creation is viewed as an inefficiency to be solved. We thereby alienate ourselves from the process of creation. It becomes merely a product. A machine can generate a tear-jerking story, but it cannot know what it means to cry.</p>
<p>When we read a book or view a painting, we are unconsciously searching for the hand of the maker, seeking validation that our own joy, suffering, and confusion are shared by another living being. Without that shared resonance, we are simply staring into a mirror of statistical probabilities, profoundly alone. There is a need to fight for these protections to save jobs and to ensure that the future of human culture remains, quite literally, human.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-fight-for-creative-rights/">The fight for creative rights</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/the-fight-for-creative-rights/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Meta lets scammers pay to play</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=meta-lets-scammers-pay-to-play</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 14:52:10 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[advertising]]></category>
		<category><![CDATA[banks]]></category>
		<category><![CDATA[Digital Advertising]]></category>
		<category><![CDATA[economy]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Instagram]]></category>
		<category><![CDATA[Meta]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[payment]]></category>
		<category><![CDATA[Scammers]]></category>
		<category><![CDATA[scams]]></category>
		<category><![CDATA[shareholders]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54462</guid>

					<description><![CDATA[<p>It's important to keep in mind that Meta is partly responsible for one-third of all successful scams in the US today</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/">Meta lets scammers pay to play</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Meta, the parent company of Instagram, Facebook, and WhatsApp, is a quintessential part of our lives, helping us connect with our loved ones, apart from networking efficiently. Most of us are hooked on our devices partly because of Meta&#8217;s dopamine addiction hamster wheel. Despite the myriad reasons for harm, Meta claims to be a force for good and is genuinely useful to people around the world, and the market rewards it for it.</p>
<p>In 2024, Meta Platforms reported revenue of $164.50 billion. As of September 30, 2025, the social media giant’s revenue was approximately $189.46 billion. It&#8217;s a titan of industry that shareholders love, and that loves its shareholders. But the excessive love of shareholders is the root of all corporate sin.</p>
<p>Despite its skyrocketing revenue and incredible technological prowess, Meta doesn&#8217;t think it should regulate its market or protect its customers from fraud and harm. The digital advertising ecosystem, once heralded as a democratisation of commercial reach, has metastasised into a complex marketplace where the distinctions between legitimate commerce and predatory fraud are increasingly obscured by algorithmic opacity.</p>
<p>Internal projections for the fiscal year 2024 indicate that advertisements promoting scams, illegal goods, and prohibited content generated approximately $16 billion, roughly 10% of the company&#8217;s total annual revenue. This revenue is safeguarded on three fronts: a &#8220;penalty bid&#8221; pricing mechanism that monetises high-risk advertisers rather than removing them; a policy framework that sets enforcement thresholds at a staggering 95% certainty level; and a corporate governance structure that explicitly caps revenue losses from safety enforcement at a fraction of the profits the fraud generates.</p>
<p>So, what does this mean? Meta will even let bad actors sell horse dung or magic remedies if they are willing to pay a premium for their risky endeavour. While the company has long faced scrutiny regarding data privacy and political influence, investigations surfacing in late 2024 and throughout 2025 have illuminated a far more tangible structural crisis: the institutionalisation of revenue derived from fraudulent advertising.</p>
<p><strong>What&#8217;s really happening?</strong></p>
<p>In November 2025, a Reuters investigation, corroborated by a cache of internal documents spanning 2021 to 2025, revealed a stark internal projection. Meta anticipated $16 billion in revenue for 2024, specifically from ads for scams and banned goods. To contextualise this figure, $16 billion exceeds the annual revenue of major global entities such as Spotify or eBay. It is a sum that materially impacts the company&#8217;s earnings per share and, consequently, its stock valuation.</p>
<p>This revenue stream is categorised internally under various euphemisms, including &#8220;violating revenue&#8221; or segments associated with higher legal risk. The existence of such specific forecasting line items indicates that this revenue is not accidental. Financial modelling that explicitly accounts for illicit revenue suggests a fiduciary dependency; removing this revenue stream would require a voluntary correction of the company’s top line by nearly 10%, a move that would likely trigger a shareholder revolt in an environment where growth in legitimate user acquisition has plateaued.</p>
<p>To put things into context, Meta shows users an estimated 15 billion scam ads a day. A lesser entity would be penalised and shut down in most countries, but the mighty titan of the digital industry has thus far remained immune despite its amoral stance on consumer safety. Upper management at Meta does not care if an online casino, a pump-and-dump investment scheme, fake websites, or purveyors of illegal drugs flood its platform with misleading ads, as long as its pockets are full.</p>
<p>After the Reuters investigation and some high-profile cases against it globally, most notably the Calise v. Meta lawsuit and the Brazil AGU lawsuit, the company is trying its best at crisis management.</p>
<p>Calise v. Meta is a class-action lawsuit in the Ninth Circuit pursuing claims of unjust enrichment, arguing that Meta actively solicited and profited from third-party fraud and should therefore disgorge the revenue. The Brazilian Attorney General’s Office has also filed suit to recover revenue from 1,770 specific fraudulent ads that used government symbols to scam citizens, demanding that the funds be deposited into a rights defence fund. Something similar is happening in the United Kingdom, where regulators found that Meta platforms were involved in 54% of all authorised push payment scams (in which users are tricked into sending money).</p>
<p>The Instagram parent company says only 10% of its revenue came from scams in 2024 and aims to cut it to 7.3% in 2025 and 5.8% by 2027. The claim seems absurd. They have the tools to stop it now, but choose to roll it out slowly to protect their profits and please shareholders.</p>
<p>Of the $16 billion in ad revenue received from bad actors, $7 billion came from higher-risk parties, advertisers Meta&#8217;s own systems flagged as extremely dubious or problematic, which makes the inaction all the more ironic. The most critical insight from the internal disclosures is the calculated decision to tolerate this revenue stream based on a comparison with potential regulatory penalties.</p>
<p>The documents suggest a stark cost-benefit analysis. While the revenue from scam ads is estimated at nearly $7 billion annually, the company’s internal risk models projected that regulatory fines for these violations would likely cap at around $1 billion. Instead of punishing or deplatforming these individuals and organisations, Meta merely charges them a higher fee.</p>
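<p>The cost-benefit logic the documents describe reduces to a one-line expected-value comparison. The figures below are the ones reported in this article; the calculation itself is an illustrative sketch, not something taken from the internal models.</p>

```python
# Illustrative expected-value comparison using the figures cited above.
# The numbers come from the reporting; the framing is a simplification.
scam_ad_revenue = 7_000_000_000     # ~$7B/year from higher-risk advertisers
projected_fine_cap = 1_000_000_000  # ~$1B ceiling projected by internal models

net_incentive = scam_ad_revenue - projected_fine_cap
print(f"Net incentive to keep the ads running: ${net_incentive:,}")
# Even paying the maximum projected fine every single year leaves roughly
# $6B of profit, so pure profit-maximisation favours tolerating the fraud.
```

So long as the worst-case fine stays an order of magnitude below the revenue it threatens, the rational (if amoral) move is to treat enforcement as a cost of doing business.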
<p>It&#8217;s important to keep in mind that Meta is partly responsible for one-third of all successful scams in the US today. Worldwide, the total cost of ad fraud was estimated at $81 billion in 2022 and was expected to surpass $100 billion in 2023, showing that current measures aren’t keeping up with increasingly sophisticated scams.</p>
<p>Furthermore, internal memos revealed the existence of revenue guardrails for safety teams. In one specific instance, a fraud prevention initiative was restricted to actions that would not reduce total ad revenue by more than 0.15% (approximately $135 million).</p>
<p>This explicit capping of safety measures based on revenue impact demonstrates that the risk premium is a protected income stream, insulated from the full force of the company’s own trust and safety capabilities.</p>
<p><strong>Who is profiting and how?</strong></p>
<p>In this ecosystem, the economic interests of platforms and the operational methodologies of fraudsters have become dangerously aligned. These systems prioritise engagement metrics such as click-through rate (CTR) and estimated action rate (EAR) over content veracity, creating a fertile substrate in which fraudulent actors do not merely survive but thrive.</p>
<p>At the core of the ad delivery engine lies the auction formula, a mathematical arbiter that decides which advertisement is shown to a user at any given millisecond. On platforms like Google, Facebook, or Instagram, you don’t win the auction with money alone; you win it with a combination of bid, ad quality, and EAR.</p>
<p>When a fraudster runs a campaign promising &#8220;Guaranteed 500% Returns in 24 Hours&#8221; or &#8220;Miracle Weight Loss Without Dieting,&#8221; users interact with these ads at high rates. The algorithm, blind to the veracity of the claim and optimising strictly for the probability of action, registers this high interaction as a signal of quality and relevance. Consequently, the auction mechanism rewards the fraudster with a higher EAR, which in turn lowers their cost per mille or cost per click.</p>
<p>In effect, the platform’s efficiency algorithms subsidise the distribution of scam content, allowing fraudsters to reach vast audiences at a fraction of the cost paid by legitimate brands.</p>
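<p>The auction logic described above can be sketched in a few lines. This is an illustrative simplification, not Meta&#8217;s or Google&#8217;s proprietary formula (which also weighs ad quality and many other signals); the bids and action rates below are invented purely to show how a cheap, sensational scam ad can outrank a more expensive honest one.</p>

```python
# Illustrative sketch of an engagement-weighted ad auction.
# Real platforms use richer formulas (bid x estimated action rate + quality
# terms); we keep only the two factors discussed in the text.

def total_value(bid_usd: float, estimated_action_rate: float) -> float:
    """Rank score: willingness to pay weighted by predicted engagement."""
    return bid_usd * estimated_action_rate

# A sensational scam ad draws clicks, so its predicted action rate is high.
scam_ad = {"bid": 0.50, "ear": 0.08}    # "Guaranteed 500% returns!"
honest_ad = {"bid": 1.20, "ear": 0.02}  # ordinary brand campaign

scam_score = total_value(scam_ad["bid"], scam_ad["ear"])       # ~0.04
honest_score = total_value(honest_ad["bid"], honest_ad["ear"]) # ~0.024

winner = "scam" if scam_score > honest_score else "honest"
print(winner)  # the cheaper scam ad wins the impression
```

Because the scam ad’s predicted engagement more than compensates for its lower bid, the auction effectively subsidises its distribution, exactly the dynamic the article describes.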
<p>The digital ad fraud ecosystem has matured into a sophisticated business-to-business economy. While the end-point scammers running fake crypto exchanges or counterfeit e-commerce stores bear the operational risk, a vast shadow supply chain of service providers extracts guaranteed profits at every stage of the fraudulent lifecycle. These entities operate with the efficiency of legitimate SaaS (Software-as-a-Service) companies, often earning monthly recurring revenue (MRR) regardless of whether the scammer’s campaign succeeds or fails.</p>
<p>The primary beneficiaries are vendors of evasion technology. Cloaking services, which filter traffic to hide malicious landing pages from platform moderators, have evolved into subscription-based platforms. Services like “TrafficArmor” and “Cloaking House” operate openly, charging tiered monthly fees ranging from $30 to $600, or utilising pay-per-click models where scammers pay premium rates (e.g., $129 for 32,500 clicks) to ensure their ads survive automated review. These companies profit by effectively selling invisibility, creating a technological tollbooth that every high-end fraudster must pay to access the audience.</p>
<p>Supporting this is the Bulletproof Hosting industry. Unlike legitimate hosts that comply with takedown requests, providers like Strox or SpeedHost247 charge premiums (e.g., $85/month or $3/day) to host malicious landing pages on servers explicitly designed to ignore abuse reports and law enforcement inquiries. By commoditising resilience, they ensure that even when a scam is detected, the infrastructure remains operational long enough to be profitable.</p>
<p>Fraud requires a constant supply of fresh identities to bypass platform bans. This has enriched Dark Web marketplaces and account brokers, who act as wholesalers of digital reputation. The most lucrative commodities are verified Business Manager (BM) accounts: Facebook/Meta ad accounts, hacked or farmed, with high spending limits and histories of legitimate activity. A verified BM can fetch $120 to $250, while aged accounts (which look less suspicious to algorithms) sell for $45–$50.</p>
<p>This sector also profits from the Stolen Credit model. Brokers sell stolen credit card details for as little as $10–$40, which fraudsters then link to compromised agency accounts. This arbitrage allows scammers to run thousands of dollars in ads using other people&#8217;s money, while the identity brokers secure risk-free profit from the initial data sale.</p>
<p>Perhaps the most significant evolution is the shift to Scam-as-a-Service (ScaaS). Technical syndicates now build and lease entire fraud kits (pre-coded phishing sites, crypto drainer scripts, and back-end management panels) to lower-level criminals.</p>
<p>“Instead of charging a flat fee, these developers often take a commission. For instance, the Inferno Drainer malware operated on a 20% commission model, syphoning off a fifth of all stolen funds from its affiliates, generating over $87 million in illicit profit before ceasing operations. This franchise model allows technical groups to scale their revenue infinitely without ever directly engaging with a victim,” said Reuters journalist Jeff Horwitz, who has been covering the alleged ad-related irregularities involving Meta.</p>
<p>Finally, the demand for human engagement signals has created a labour economy in Southeast Asia (e.g., Vietnam, Myanmar) and parts of Eastern Europe. “Click Farms” or “Fraud Farms” employ low-wage workers to manually interact with ads, solve CAPTCHAs, and warm up accounts.</p>
<p>“These operations charge roughly $1 per 1,000 clicks/likes, creating a volume-based revenue stream that exploits global wage disparities to defeat advanced behavioural biometrics. By providing the human touch that algorithms crave, these farms monetise the very mechanism designed to stop them,” Horwitz said.</p>
<p>And it doesn’t stop there. The data collected at these farms is often resold. If you’ve been the victim of a cybercrime, there’s a 34% chance it will happen again if you’re an individual, and an 84% chance if you’re a business. Once scammed, you can end up on what’s called a ‘suckers list,’ marking you as an easy target. These lists are valuable, and people are willing to pay a lot to get them.</p>
<p><strong>How is the world reacting to it?</strong></p>
<p>The world is reacting to the industrialisation of ad fraud with a shift from “user beware” to platform liability. In 2024 and 2025, governments and industries moved to dismantle the economic impunity of platforms, forcing them to bear the costs of the fraud they facilitate.</p>
<p>The most significant development is the regulatory move to force reimbursement. For example, the UK Payment Systems Regulator implemented a mandatory reimbursement requirement for Authorised Push Payment (APP) fraud in 2024. Crucially, the liability is now split 50:50 between the sending bank and the receiving payment service provider.</p>
<p>While this primarily targets banks, it has created immense pressure from the financial sector on tech platforms. Banks, now on the hook for millions in refunds, are aggressively lobbying for a “polluter pays” model, arguing that since 60–80% of scams originate on Meta&#8217;s platforms, the tech giants should contribute to the reimbursement pot.</p>
<p>Effective December 2024, Singapore’s framework assigns specific duties to financial institutions and telcos to mitigate phishing scams. If banks fail to send real-time transaction alerts or impose cooling-off periods, they are liable for losses. This creates a regulatory precedent where infrastructure providers are held financially accountable for gatekeeping failures. Governments are moving beyond voluntary codes of conduct to enforceable legislation with massive financial penalties.</p>
<p>The “UK Online Safety Act,” fully enforceable in 2025, requires platforms to proactively prevent fraudulent advertising. Non-compliance can result in fines of up to £18 million or 10% of global annual turnover (potentially billions for Meta).</p>
<p>In Europe, something similar is happening with the “Digital Services Act.” The European Commission has opened investigations into “Very Large Online Platforms” regarding their risk mitigation for fraudulent ads. The DSA empowers the European Union to fine companies up to 6% of their global turnover if they fail to manage systemic risks, including the spread of financial scams.</p>
<p>In Australia, the “Scams Prevention Framework,” which was passed in early 2025, introduces mandatory codes for banks, telcos, and digital platforms. It includes fines of up to AUD 50 million for non-compliance, specifically targeting the failure to detect and remove scam content.</p>
<p>Celebrities have also turned to litigation. For example, in Andrew Forrest vs Meta, an ongoing case, the Australian billionaire pursued Meta in both Australian and US courts over the proliferation of crypto scams using his likeness. While the Australian criminal case was dropped due to evidential hurdles, the US civil lawsuit survived a motion to dismiss in 2024.</p>
<p>This case is pivotal as it challenges Section 230 immunity often claimed by platforms, arguing that Meta’s ad tools contributed to the content creation, thereby stripping them of neutral publisher status.</p>
<p>Even the Australian Competition and Consumer Commission sued Meta for aiding and abetting false conduct by publishing scam ads featuring public figures, arguing that Meta&#8217;s algorithms actively targeted these scams to susceptible users.</p>
<p>Meta has, under immense pressure, reversed its 2021 decision to abandon facial recognition. In late 2024, the company began testing facial recognition technology to combat “celeb-bait” scams. The system compares faces in suspected ads against the profile pictures of public figures.</p>
<p>If a match is found and the ad is a scam, it is blocked. This marks a significant concession, as it acknowledges that privacy concerns regarding biometrics are outweighed by the need to stop the financial bleeding caused by industrial-scale fraud.</p>
<p>Major players like Meta, Coinbase, and Match Group have formed coalitions to share intelligence on pig-butchering operations, aiming to sever the communication lines between the scam compounds and their victims.</p>
<p><strong>Engagement fuels fraud risks</strong></p>
<p>This is the aftermath of prioritising engagement over verification. You end up with an ecosystem where scams and fraud flourish, and customers get hurt. At the heart of this crisis lies the EAR algorithm, a mechanism that inadvertently subsidises deception by rewarding the hyper-engaging nature of scams with lower distribution costs. This economic alignment between the platform&#8217;s profit motives and the fraudster&#8217;s operational goals has created a “Market for Lemons,” where predatory content effectively crowds out legitimate commerce.</p>
<p>The “Retargeting Loop” further exacerbates this by trapping vulnerable populations in algorithmic echo chambers, commoditising their susceptibility, and reselling it through the secondary market of recovery scams.</p>
<p>Technologically, the ecosystem has evolved into an asymmetric arms race, where enforcement is consistently outpaced by evasion. The transition from simple static landing pages to Generation 4 cloaking technologies, which are capable of analysing device telemetry, battery status, and gyroscopic movements in milliseconds, demonstrates that fraud is no longer the domain of opportunistic amateurs. It has industrialised into a sophisticated Fraud-as-a-Service economy. This shadow supply chain, composed of bulletproof hosting providers, identity brokers on the dark web, and commercial cloaking services, operates with the efficiency of the legitimate software sector.</p>
<p>By lowering the technical barrier to entry, these enablers have democratised access to high-end evasion tools, allowing even low-skilled actors to launch enterprise-grade attacks against global platforms.</p>
<p>The failure of self-regulation is now evident in the global legislative pivot toward platform liability. For over a decade, the industry operated under a “user beware” paradigm, but the sheer scale of financial loss has forced a regulatory correction. Initiatives like the United Kingdom’s mandatory reimbursement requirement and Singapore’s “Shared Responsibility Framework” signal the end of platform immunity.</p>
<p>By shifting the financial burden of fraud from the victim to the infrastructure providers, regulators are attempting to realign economic incentives. Only when the cost of hosting a scam exceeds the revenue generated from its ads will platforms invest the necessary resources to close the technological loopholes they currently tolerate.</p>
<p>Ultimately, the future of the digital advertising economy hinges on a fundamental shift from plausible deniability to mandatory verification. The era of anonymous algorithmic bidding must yield to a “Know Your Business” standard, where access to the ad auction is predicated on verified identity rather than mere creditworthiness.</p>
<p>As Generative AI threatens to flood the web with infinite synthetic content, the only viable defence is a strict chain of custody for digital identity. If structural reform doesn’t ensue soon, corporate social media platforms will slowly transform into a black market without oversight.</p>
<p>The world is reacting, but laws are struggling to keep up with fast-moving algorithms. For now, as a reader and consumer, be careful: any ad you see on Instagram or Facebook could be a scam, delivered by Meta Platforms, one of the world’s biggest advertising platforms.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/">Meta lets scammers pay to play</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/meta-lets-scammers-pay-to-play/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Is Dot the future of last-mile delivery?</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/is-dot-the-future-of-last-mile-delivery/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=is-dot-the-future-of-last-mile-delivery</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/is-dot-the-future-of-last-mile-delivery/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 15 Jan 2026 13:26:05 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[DoorDash]]></category>
		<category><![CDATA[Dot]]></category>
		<category><![CDATA[Driveways]]></category>
		<category><![CDATA[electric vehicle]]></category>
		<category><![CDATA[Phoenix]]></category>
		<category><![CDATA[robot]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Sidewalks]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54458</guid>

					<description><![CDATA[<p>DoorDash says Dot has been tested across millions of simulated and real-world miles</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/is-dot-the-future-of-last-mile-delivery/">Is Dot the future of last-mile delivery?</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>DoorDash has moved its road-going delivery robot called Dot from stage to street in early access service across the Phoenix metro after unveiling it at Dash Forward 2025, positioning a compact electric vehicle and an AI dispatcher to take on short local trips where a full-size car is overkill.</p>
<p>The pitch is simple and provocative: Dot targets neighbourhood distances at up to twenty miles per hour with a thirty-pound payload, while an Autonomous Delivery Platform decides in real time whether a robot, a human Dasher, a sidewalk bot, or a drone is the best option.</p>
<p><strong>Why this robot and why now?</strong></p>
<p>The last mile has always been a series of last metres, and DoorDash argues that many everyday errands do not require a car, which is why Dot is designed to move through streets, bike lanes, sidewalks, and driveways rather than living only on the curb or only on the road. Phoenix and its nearby cities provide the proving ground with broad lanes, active cycling infrastructure, and a mix of suburban and urban blocks that let a small robot show it can coexist without clogging sidewalks or becoming a rolling hazard.</p>
<p>The company frames early access as a data gathering phase that tunes dispatch logic, improves merchant handoffs, and learns from friction in parking lots, curb cuts, and crosswalks before expanding to reach an estimated one and a half million residents in the region. That choice reflects hard lessons in autonomy where reliability at scale tends to favour orchestration over a single mode and where flexibility beats bravado in diverse neighbourhoods and rulesets.</p>
<p>The test is not just whether a robot can drive a good route on a good day, but whether a blended system can deliver on time, keep food quality high, minimise interventions, and earn enough goodwill among pedestrians, cyclists, and drivers to share space peacefully.</p>
<p>DoorDash’s public stance is that autonomy is additive rather than subtractive, with robots taking lightweight, predictable trips so people can focus on complex, time-sensitive, or higher-touch deliveries that still demand judgment and access skills.</p>
<p>That hybrid approach aims to absorb demand spikes and detours by matching each order to the right agent in real time, informed by distance, traffic, weight, and readiness rather than one-size-fits-all dispatch.</p>
<p>It is a bet that the future of local logistics looks less like an all-or-nothing automation moonshot and more like a network that quietly blends people and machines to reduce cost and delay without compromising safety or access.</p>
<p><strong>What Dot actually is</strong></p>
<p>Dot is a compact four-wheeled electric vehicle that DoorDash describes as roughly one-tenth the size of a car, built to travel up to twenty miles per hour while carrying up to thirty pounds, which is enough room for about six large pizza boxes inside a front-opening storage bay.</p>
<p>The robot is designed to handle the transitions that often trip small sidewalk bots, including threading through driveways, crossing curb cuts without blocking traffic, and navigating parking lots to reach pickup counters rather than stalling at the edge of a plaza.</p>
<p>Company materials highlight a six-to-eight-hour battery endurance window with a swappable pack architecture to keep utilisation high during peak periods instead of tanking throughput on long charge cycles.</p>
<p>The platform emphasises visibility and legibility at a human scale with a bright red hull and lighting that reads clearly to other road and sidewalk users, which matters when operating near strollers, wheelchairs, scooters, cyclists, and cars.</p>
<p>Sensors and perception stacks combine cameras, radar, and lidar in configurations intended to perceive complex urban scenes where occlusions, construction, and parked trucks often mask cross traffic and pedestrians until the last moment.</p>
<p>The cargo area supports modular inserts such as cup holders or coolers so merchants can secure drinks and temperature-sensitive orders to reduce spills and condensation, because real-world delivery quality depends on small choices that prevent messes and go far beyond route planning.</p>
<p>Every design choice serves a simple thesis: a robot slightly larger, faster, and more robust than a cooler-sized sidewalk bot can preserve food quality by moving at neighbourhood speeds without demanding the footprint of a car.</p>
<p>The point is to demonstrate predictability and courtesy, allowing a robot to blend into bike lanes and low-speed roads without becoming an obstacle or an irritant. This is exactly how trust is built, one quiet trip at a time.</p>
<p><strong>The brains behind the wheels</strong></p>
<p>Dot is only as useful as the dispatcher that assigns trips, which is why DoorDash launched an Autonomous Delivery Platform that weighs speed, cost, location, order composition, and conditions to route an order to a robot, a person, a sidewalk bot, or a drone.</p>
<p>SmartScale sits on the merchant side, using AI to validate bag weights, signal readiness, and improve order accuracy so the dispatcher does not send an overweight or mispacked order to a constrained mode that cannot carry it safely.</p>
<p>The idea is to cut idle time and avoid preventable errors, which are the small hinges that swing big doors in unit economics by reducing rework and lowering intervention rates across the fleet. DoorDash says Dot has been tested across millions of simulated and real-world miles, which reflects an industry-wide shift toward deploying learning machines with structured fallbacks rather than claiming literal full autonomy that ignores operational realities.</p>
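<p>The dispatch decision described above can be sketched as a constrained selection problem: filter out agents that cannot carry the order or cover the distance, then pick the cheapest of the rest. The agent names, capacities, and costs below are invented for illustration; DoorDash has not published its actual dispatch logic:</p>
<pre><code>```python
# Hypothetical sketch of a multi-modal dispatcher. All fleet parameters
# are invented; this is not DoorDash's Autonomous Delivery Platform.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    max_payload_lb: float   # heaviest order this mode can carry
    max_range_mi: float     # longest trip this mode can cover
    cost_per_trip: float    # assumed marginal cost of one delivery

def dispatch(order_weight_lb: float, trip_miles: float, fleet: list[Agent]) -> Agent | None:
    """Return the cheapest agent that can feasibly handle the order."""
    feasible = [a for a in fleet
                if order_weight_lb <= a.max_payload_lb and trip_miles <= a.max_range_mi]
    return min(feasible, key=lambda a: a.cost_per_trip) if feasible else None

fleet = [
    Agent("sidewalk_bot", max_payload_lb=20,  max_range_mi=1.5, cost_per_trip=1.50),
    Agent("dot",          max_payload_lb=30,  max_range_mi=5.0, cost_per_trip=2.00),
    Agent("dasher",       max_payload_lb=100, max_range_mi=30,  cost_per_trip=7.00),
]

# An 8 lb order over 3 miles is too far for the sidewalk bot,
# so the robot-fit trip goes to Dot; a 45 lb order falls back to a Dasher.
assert dispatch(8, 3.0, fleet).name == "dot"
assert dispatch(45, 3.0, fleet).name == "dasher"
```</code></pre>
<p>The real system reportedly also weighs order composition, conditions, and merchant readiness signals, but the shape of the problem — feasibility filters followed by cost ranking — is the same.</p>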
<p>Remote assistance and documented handoff procedures are treated as part of the system because real streets throw edge cases constantly, and the fastest way to improve perception, prediction, and planning is to keep the service live while capturing those edge cases for training.</p>
<p>The platform’s advantage lies in more than route choice. It involves orchestration across people and multiple types of robots, enabling each agent to handle what it does best while the system as a whole smooths surges and detours that would otherwise jam a single mode.</p>
<p>That is why the early Phoenix footprint includes Tempe and Mesa, where Dot has already navigated bike lanes, parking lots, and sidewalks, placing real stress on the full stack from dispatch to the final handoff at curbs and driveways.</p>
<p>The company and press observers have stressed that safety and reliability matter more than flashy demos and that the metric to watch is not a viral video but consistent on-time deliveries with minimal friction for everyone sharing the lane.</p>
<p><strong>What history says</strong></p>
<p>If this sounds ambitious, it is also tempered by recent history as companies with deep pockets rethought sidewalk robots when support costs and exception handling overwhelmed early optimism.</p>
<p>Amazon scaled back its Scout programme, and FedEx shut down Roxo, illustrating that last-mile autonomy is a grind that punishes naive scaling plans and underestimation of real-world complexity.</p>
<p>Coverage of those decisions emphasised that robotics remains a strategic pillar at both companies even as they redirected resources away from costly field tests that did not meet near-term value requirements.</p>
<p>The lesson is less that robots cannot deliver and more that the operational design domain matters, which is why Dot’s remit includes streets, bike lanes, and driveways rather than constraining itself to narrow sidewalks with constant obstacles.</p>
<p>It also explains DoorDash’s hybrid posture that centres Dashers and multiple modes, because a blended network can keep service flowing when a single mode would stall due to rules, blockages, or unexpected detours.</p>
<p>Meanwhile, Serve Robotics has shown an urban path with sidewalk bots integrated onto platforms like Uber Eats, crossing the 1000-robot milestone and reiterating plans to reach about 2000 deployed by the end of 2025.</p>
<p>Serve’s disclosures focus attention on the levers that decide winners in autonomy: utilisation, intervention rates, and software revenue per unit, rather than raw robot counts, which is why cutting remote assists and idle time is the boring frontier that matters most.</p>
<p>DoorDash’s scale as the largest American food delivery marketplace could provide a data advantage if its dispatcher consistently routes robot-fit orders to bots while keeping humans on the hairier trips, improving network flow without stepping on the customer experience.</p>
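<p>A back-of-envelope calculation shows why intervention rate, not robot count, dominates those unit economics: every remote assist adds staffed human minutes to a trip. All figures below are invented for illustration, not disclosed numbers from Serve or DoorDash:</p>
<pre><code>```python
# Illustrative unit-economics sketch; every input is a made-up assumption.

def cost_per_delivery(deliveries_per_day: float, robot_cost_per_day: float,
                      intervention_rate: float, minutes_per_assist: float,
                      operator_rate_per_min: float) -> float:
    """Average cost of one delivery, including remote-assist labour."""
    assist_cost = (deliveries_per_day * intervention_rate *
                   minutes_per_assist * operator_rate_per_min)
    return (robot_cost_per_day + assist_cost) / deliveries_per_day

# Same robot, same demand; only the intervention rate changes.
high_touch = cost_per_delivery(20, 30.0, intervention_rate=0.50,
                               minutes_per_assist=5, operator_rate_per_min=0.50)  # 2.75
low_touch  = cost_per_delivery(20, 30.0, intervention_rate=0.05,
                               minutes_per_assist=5, operator_rate_per_min=0.50)  # 1.625
```</code></pre>
<p>Cutting assists from one-in-two trips to one-in-twenty drops the per-delivery cost by roughly 40% in this toy model, which is why shaving remote assists and idle time is the boring frontier that matters most.</p>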
<p><strong>True test of success</strong></p>
<p>The near-term markers to watch are pragmatic expansion pace in Phoenix, the diversity of merchants participating, and any disclosures around delivery completion times and intervention rates once the honeymoon phase gives way to the long tail of weird Tuesdays.</p>
<p>Local rules and public sentiment will shape the path because cities are still figuring out how to regulate small delivery vehicles on sidewalks and bike lanes in ways that protect accessibility and safety for everyone sharing the space.</p>
<p>Company materials and media coverage have underscored Dot’s ability to blend into bike lanes and low-speed roads without becoming a hazard, a design mission that must be lived on the street day after day rather than asserted on a stage.</p>
<p>If Dot consistently expands the range of trips where a small electric self-driving vehicle is the fastest, most affordable, and least impactful option, it will become a commonplace utility. That commonality will then be the true measure of success.</p>
<p>If interventions stay high and public patience runs thin, the platform will fall back on Dashers for routes robots cannot handle economically at scale, and the orchestration layer will remain the product that quietly allocates work to the right hands and wheels.</p>
<p>In the end, the case for Dot is a system shot, and Phoenix will show whether brains and form factor can outpace the city’s appetite for new edge cases while maintaining speed, safety, and goodwill.</p>
<p>So here is the test that matters: not the demo reel but the daily grind of orders, lanes, curb cuts, and human patience that does not care about press releases. Can an AI dispatcher keep choosing the right agent and shaving minutes without fraying nerves or spilling soup when the bike lane is blocked, and the driveway is tight?</p>
<p>If Dot keeps interventions low and completion times tight, the economics tip from novelty to inevitability and scale follows quietly. If exceptions dominate and goodwill thins, the platform routes work back to people and redraws the robot’s map with humility. Phoenix is only chapter one, and the verdict arrives when dinner arrives hot and on time, which is the only referendum that counts.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/is-dot-the-future-of-last-mile-delivery/">Is Dot the future of last-mile delivery?</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/is-dot-the-future-of-last-mile-delivery/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Lip-Bu Tan’s brutal Intel reset</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/lip-bu-tans-brutal-intel-reset/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=lip-bu-tans-brutal-intel-reset</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/lip-bu-tans-brutal-intel-reset/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 19:35:23 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[engineers]]></category>
		<category><![CDATA[Falcon Shores]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Jaguar Shores]]></category>
		<category><![CDATA[layoffs]]></category>
		<category><![CDATA[Lip-Bu Tan]]></category>
		<category><![CDATA[money]]></category>
		<category><![CDATA[NVIDIA]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[workforce]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54946</guid>

					<description><![CDATA[<p>When the board appointed Lip-Bu Tan as Intel CEO in March 2025, they handed the keys to a demolition expert</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/lip-bu-tans-brutal-intel-reset/">Lip-Bu Tan’s brutal Intel reset</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If you walked into Intel’s Santa Clara headquarters in late 2024, you could practically feel the anxiety vibrating through the linoleum. The company that had once defined Silicon Valley (the firm that put the &#8220;silicon&#8221; in the valley) was bleeding out. The ambitious &#8220;IDM 2.0&#8221; strategy championed by former CEO Pat Gelsinger had turned into a money pit, draining cash reserves to build massive factories in Ohio and Germany while the company’s actual products lost ground to AMD and NVIDIA. By the time the board accepted Gelsinger&#8217;s resignation in December, the company was staring down a fiscal abyss, reporting an annual net revenue loss that would eventually hit $18.8 billion.</p>
<p>The semiconductor industry loves a good comeback story, but what happened next was a total teardown. When the board appointed Lip-Bu Tan as CEO in March 2025, they handed the keys to a demolition expert. Tan, a venture capitalist and the architect of Cadence Design Systems&#8217; 3,200% stock rise, had spent months on Intel’s board complaining about &#8220;bloat&#8221; and a &#8220;risk-averse culture&#8221; before resigning in frustration in August 2024. Now, he was back, and he wasn&#8217;t asking for permission to change things. He was demanding a revolution.</p>
<p>From his appointment in March through the closing days of December 2025, Tan orchestrated perhaps the most aggressive corporate restructuring in modern tech history. He slashed the workforce, sold off prized assets, nationalised part of the company, and even took money from his fiercest rival, all to fund a “Hail Mary” pass on a new AI architecture.</p>
<p><strong>Breaking the frozen middle</strong></p>
<p>To understand why Lip-Bu Tan&#8217;s arrival felt like an earthquake, you have to understand the soil conditions he inherited. For years, Intel had been plagued by what insiders called the &#8220;frozen middle,&#8221; layers upon layers of middle management that insulated decision-makers from engineering realities.</p>
<p>In his first town hall meeting in April 2025, Tan didn&#8217;t mince words. He stood on stage and told the assembled staff that the outside world was seeing the company as &#8220;too slow, too complex, and too set in our ways.&#8221; He urged them to be &#8220;brutally honest&#8221; about their failings, a shocking admission for a company that had spent decades drinking its own Kool-Aid.</p>
<p>Lip-Bu Tan’s philosophy was simple. Engineers should run engineering companies. He had been horrified to discover that some project teams at Intel were five times larger than comparable teams at AMD, yet produced inferior work. The problem was a proliferation of &#8220;meetings about work&#8221; replacing the actual work.</p>
<p>Managers were incentivised to grow their headcount rather than their output. Tan declared war on this metric immediately. In a memo titled &#8220;Our Path Forward,&#8221; he explicitly stated that the size of a manager’s team would no longer be a badge of honour.</p>
<p>The resulting purge was swift and painful. Throughout the spring and summer of 2025, Tan initiated a &#8220;systematic review&#8221; of the workforce that went far beyond the standard corporate trimming. By the end of the year, Intel had cut approximately 33,000 roles, bringing the headcount down from nearly 109,000 to around 75,000.</p>
<p>The marketing, HR, and administrative divisions were decimated, but Tan also took a scalpel to product management teams he felt were creating &#8220;roadmap noise,&#8221; generating requirements for products that would never be profitable.</p>
<p>Apart from firing people and rewiring the organisational chart, Tan elevated the leaders of process technology, design, and manufacturing directly to the “Executive Team,” bypassing the business unit general managers who had previously acted as gatekeepers. If you were building the chips, you now had a direct line to the CEO.</p>
<p>If you were managing the people who built the chips, you were likely looking for a new job. It was a brutal cultural reset, designed to reduce &#8220;decision latency&#8221; and force the company to move at the speed of the AI market, not the speed of an internal committee.</p>
<p><strong>Selling silver to save the ship</strong></p>
<p>While Lip-Bu Tan was fixing the culture, he also had to fix the bank account. The &#8220;Smart Capital&#8221; strategy of the previous era had left Intel cash-poor just as it needed to buy expensive High-NA EUV lithography machines for its new factories.</p>
<p>The company needed liquidity, and it needed it fast. Tan looked at Intel’s sprawling portfolio and decided to amputate everything that wasn&#8217;t essential to the core mission of making high-performance logic.</p>
<p>The biggest casualty was “Altera,” the programmable chip unit Intel had acquired in 2015. For a decade, Intel had clung to the &#8220;integrationist&#8221; philosophy that Field Programmable Gate Arrays (FPGAs) would eventually be merged into the CPU package for every data centre server. Lip-Bu Tan saw this for what it was: a distraction.</p>
<p>In April 2025, he pulled the trigger on a deal to sell a 51% controlling stake in Altera to the private equity firm Silver Lake for $8.75 billion. This deal was significant for the cash it generated and for what it signalled. By turning Altera into an independent entity, Tan was effectively admitting that the integration strategy had failed. It freed Altera to partner with whoever it wanted (even ARM or RISC-V vendors), and it also rescued Intel from the operational headache of managing a completely different silicon architecture. The $8.75 billion injection was a lifeline, allowing Intel to keep the lights on in its Arizona and Ohio construction sites without resorting to high-interest debt that would have crippled its balance sheet.</p>
<p>Lip-Bu Tan further used the fiscal year 2025 to &#8220;kitchen sink&#8221; every bit of bad news he could find. The company took massive write-downs and restructuring charges, leading to ugly quarterly earnings reports that would have panicked a less experienced CEO. But Tan knew that to rebuild, he first had to clear the rubble. He was willing to sacrifice short-term stock performance and endure the headlines about &#8220;record losses&#8221; to reset the baseline for 2026. It was a classic private equity move executed on a public market stage, and it stripped the asset down to its studs so as to rebuild it properly.</p>
<p><strong>Goodbye Falcon, hello Jaguar</strong></p>
<p>Financial engineering can save a balance sheet, but only product engineering can save a tech company. And in early 2025, Intel’s product roadmap was a mess. The company had missed the generative AI boat entirely.</p>
<p>Its &#8220;Gaudi 3&#8221; accelerator, launched to compete with NVIDIA’s H100, was a commercial dud. Despite offering decent specs on paper, it lacked the software ecosystem to break NVIDIA’s CUDA moat, and enterprise customers largely ignored it.</p>
<p>Worse, the next big hope, a chip called &#8220;Falcon Shores,&#8221; was dead on arrival. Originally billed as a revolutionary &#8220;XPU&#8221; that would combine CPU and GPU cores on a single die, Falcon Shores had suffered from shifting specs and delays. By the time Tan took over, it was clear that even if they launched it, it would be a &#8220;me-too&#8221; product arriving too late to matter. In a move that shocked industry watchers, Tan cancelled the commercial launch of “Falcon Shores,” relegating it to an &#8220;internal test vehicle.&#8221;</p>
<p>He decided to skip a generation. Instead of fighting NVIDIA’s current lineup, Tan pointed the company toward late 2027 and a new architecture called &#8220;Jaguar Shores.&#8221; This was a bet on &#8220;rack-scale&#8221; computing. Tan realised that in the age of massive Large Language Models (LLMs), the unit of compute wasn&#8217;t the chip anymore. It was the entire server rack.</p>
<p>“Jaguar Shores” is designed to be a beast. Leaked specs reveal a massive 92.5mm x 92.5mm package, suggesting a complex multi-tile design stitched together with Intel’s advanced packaging technology. But the real secret sauce is the light. Under Tan, Intel doubled down on Silicon Photonics, a technology that uses light instead of electricity to move data.</p>
<p>The bottleneck in modern AI clusters isn&#8217;t usually the speed of the processor. It&#8217;s the speed at which you can move data between processors. NVIDIA solves this with heavy, power-hungry copper cables. Intel’s Jaguar Shores is designed to use Optical Compute Interconnect (OCI) chiplets that can shoot data across the data centre at the speed of light. Lip-Bu Tan is betting that by 2027, the power limits of copper wire will hit a wall, and Intel’s optical solution will be the only way to build larger AI brains.</p>
<p>To feed this beast, Tan also made a surprising play in memory. He partnered with SoftBank’s subsidiary, Saimemory, to develop a new type of memory called Z-Angle Memory (ZAM). Unlike the standard High Bandwidth Memory (HBM) that is currently in short supply, ZAM uses a diagonal vertical stacking method to pack more density into a smaller space. Intel claims it could offer two to three times the capacity of current memory at half the power. It’s a long shot (prototypes aren&#8217;t due until 2028), but it showed that Tan was done playing catch-up. He was trying to change the rules of the game.</p>
<p><strong>Capital restructuring</strong></p>
<p>By August 2025, even with the Altera money and the layoffs, the math wasn&#8217;t adding up. Building the world’s most advanced chip factories costs hundreds of billions of dollars, and Intel was running on fumes. Lip-Bu Tan realised he couldn&#8217;t do it alone. He needed partners, and he wasn&#8217;t picky about where they came from.</p>
<p>What followed was a capital restructuring so complex and unprecedented that it blurred the lines between private enterprise, national security, and industrial policy. First came the US government. In a historic move, Washington converted $8.9 billion of promised “CHIPS Act” grants into a direct 10% equity stake in Intel. This was a crossing of the Rubicon. Intel was designated a &#8220;National Champion,&#8221; too big to fail and partially owned by the taxpayer. Critics called it &#8220;State Corporatism,&#8221; warning that political pressure could now dictate where Intel built its factories or who it hired. But for Tan, it was survival.</p>
<p>Then came Masayoshi Son. The SoftBank CEO, seeing an opportunity to secure a supply chain for his own AI ambitions, poured $2 billion into Intel stock. This tied Intel’s manufacturing future to the Japanese tech ecosystem and gave Tan a vote of confidence from one of the world’s most aggressive tech investors.</p>
<p>But the real shocker came in September. In a twist that felt like the Yankees investing in the Red Sox, NVIDIA agreed to buy a $5 billion stake in Intel. Why would Jensen Huang prop up his dying rival? It was a calculated hedge: NVIDIA needed to keep regulators off its back by showing that the market was competitive, and it needed a strong x86 CPU ecosystem to host its GPUs. If Intel collapsed, the data centre market might shift entirely to ARM-based processors, where NVIDIA faces stiffer competition. For Tan, taking money from NVIDIA was a humbling pill to swallow, but it stabilised the stock price and signalled to customers that Intel wasn&#8217;t going anywhere.</p>
<p><strong>The new Intel workforce</strong></p>
<p>What Lip-Bu Tan&#8217;s revolution actually felt like inside Intel was less strategic pivot than controlled demolition. Engineers who had spent careers navigating bureaucracy through weekly syncs and quarterly reviews arrived one Monday to find their entire management chain gone. Directors who once oversaw thirty-person teams now report directly to VPs. Mid-level managers, the connective tissue of old Intel, vanished in weeks, not months.</p>
<p>Lip-Bu Tan deliberately shattered Intel&#8217;s foundational social contract. For decades, joining Intel meant trading startup lottery tickets for something steadier: job security, incremental promotions, the quiet prestige of building the world&#8217;s processors. Engineers were expected to retire with the company. Tan replaced that implicit promise with volatility marketed as meritocracy.</p>
<p>Performance reviews became surgical. Teams were evaluated by taped-out silicon and working chips, not roadmap presentations. Engineers who had optimised for political navigation suddenly found the game unrecognisable.</p>
<p>The response split along generational and temperamental lines. Long-tenured Intel lifers discovered their institutional memory (knowing which VP to cc, which process to invoke) had transformed overnight from asset to liability.</p>
<p>&#8220;Everything I knew about how to get things done here became irrelevant,&#8221; said a fifteen-year veteran who left for AMD. For them, Tan&#8217;s Intel felt like chaos wearing a reform badge.</p>
<p>But others described it as liberation. Younger engineers, frustrated by layers of approval for simple decisions, suddenly had direct access to executives who wanted problems solved, not processes followed.</p>
<p>&#8220;I shipped more in six months under Tan than in three years before,&#8221; one hardware designer said.</p>
<p>Intel became attractive again, but to a different archetype. It was the place for risk-tolerant designers who wanted massive R&amp;D budgets without startup instability, AI systems engineers lured by foundry ambitions, people energised rather than paralysed by existential stakes.</p>
<p>The talent exodus told competing stories. Senior architects departed for NVIDIA&#8217;s AI chip teams or AMD&#8217;s data centre divisions, taking decades of x86 optimisation knowledge with them. But Intel simultaneously pulled engineers from Apple&#8217;s silicon group, poached packaging experts from TSMC suppliers, and hired machine learning systems designers who had never considered Intel before.</p>
<p>The company was haemorrhaging institutional knowledge while injecting outside perspective, losing the people who knew why things were done a certain way, and gaining people who didn&#8217;t care about the old ways at all.</p>
<p>Compensation structures reinforced the shift. Stock grants became more aggressive but tied to specific chip milestones. Bonuses swung wildly based on quarterly execution. Engineers accustomed to predictable compensation discovered their total comp could vary by 30% year-over-year. This was intentional. Tan wanted people motivated by building winning products, not by optimising tenure. It attracted gamblers and builders. It repelled those who valued stability above intensity.</p>
<p>Intel&#8217;s old mantra of &#8220;constructive confrontation,&#8221; spirited debate within supportive structures, gave way to confrontation without cushioning. Town halls where leadership acknowledged uncertainty offered few answers. Slack channels that once buzzed with institutional gossip went strangely quiet. Fear permeated the campuses, yes, but so did clarity. Everyone understood the mandate: you delivered, or you became irrelevant.</p>
<p>The unresolved question hanging over Intel&#8217;s reinvention is whether a company traumatised by mass layoffs and cultural upheaval can still innovate at the scale required to challenge TSMC and NVIDIA, or whether this kind of creative destruction, brutal as it feels, is precisely what competing in AI-era silicon demands. Lip-Bu Tan bet everything that trauma and transformation are inseparable. Intel&#8217;s workforce is living the experiment.</p>
<p><strong>The 18A gamble</strong></p>
<p>All of these manoeuvres, the layoffs, the asset sales, the government bailout, were in service of one singular goal: getting the &#8220;18A&#8221; manufacturing process to work. This was the finish line of the &#8220;five nodes in four years&#8221; marathon. 18A was supposed to be the breakthrough technology that finally put Intel ahead of TSMC, using new &#8220;RibbonFET&#8221; transistors and &#8220;PowerVia&#8221; backside power delivery to make chips faster and more efficient.</p>
<p>For most of 2025, it looked like a disaster. Rumours swirled in the summer that yields (the percentage of functional chips on a wafer) were as low as 10%. The industry whispered that the technology was too complex, that trying to introduce two major innovations at once was suicide. NVIDIA, which had been testing the node for potential use, reportedly &#8220;halted&#8221; its immediate production plans in December—a stinging rebuke.</p>
<p>Yet Tan kept the engineers focused. He refused to let the roadmap slip. And in a photo finish that saved the year, Intel officially announced in late December 2025 that 18A had achieved &#8220;High-Volume Manufacturing&#8221; readiness. They had done it. They had functional chips, the &#8220;Panther Lake&#8221; for laptops and &#8220;Clearwater Forest&#8221; for servers, rolling off the line.</p>
<p>The yield wasn&#8217;t perfect, and the external customer list was still thin, mostly Microsoft and AWS committing to specific designs rather than broad volume. But the technical milestone was achieved. Intel had proved it could still manufacture at the bleeding edge.</p>
<p>As 2026 dawns, Lip-Bu Tan presides over a fundamentally different company than the one he took over. It is smaller, leaner, and partially nationalised. It is tethered to a complex web of alliances with competitors and governments. It has bet its future on an optical AI architecture that won&#8217;t arrive for two years. But for the first time in a long time, the bleeding has stopped. The &#8220;Tan Doctrine&#8221; of 2025 was brutal, ugly, and necessary. He dismantled the old Intel to build a fortress that might just survive the AI wars.</p>
<p>Lip-Bu Tan tore Intel apart and rebuilt it in his own image. The layoffs, asset sales, and alliances with governments and competitors were ruthless, and they left scars. Intel is now a high-stakes, high-pressure machine built for the AI era. Tan has proven that survival requires speed, decisiveness, and a willingness to break sacred cows, but the human cost is enormous. Long-tenured engineers walked out, institutional memory was lost, and the culture is harsher and less forgiving. Yet, the gamble is paying off: 18A works, and the company can compete again. Tan has created a lean, dangerous Intel, one that can fight, innovate, and maybe win, but only if it can maintain focus and avoid imploding under its own intensity.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/lip-bu-tans-brutal-intel-reset/">Lip-Bu Tan’s brutal Intel reset</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/lip-bu-tans-brutal-intel-reset/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Demis Hassabis expands tech throne</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=demis-hassabis-expands-tech-throne</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 19:20:07 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[AlphaFold]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Chess]]></category>
		<category><![CDATA[DeepMind]]></category>
		<category><![CDATA[Demis Hassabis]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[London]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54944</guid>

					<description><![CDATA[<p>Demis Hassabis, who had once wished tech giants would move more slowly on AI deployment to ensure safety, was now the man pressing the accelerator</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/">Demis Hassabis expands tech throne</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>On a crisp October morning in 2024, the phone rang in London with a call that every scientist dreams of, yet few dare to expect. The Royal Swedish Academy of Sciences was on the line. Demis Hassabis, the CEO of Google DeepMind, along with his colleague John Jumper, had been awarded the Nobel Prize in Chemistry.</p>
<p>The accolade was not for a new chemical compound synthesised in a beaker but for code, specifically AlphaFold, an artificial intelligence (AI) system that had solved a 50-year-old grand challenge in biology: accurately predicting the complex three-dimensional structures of proteins.</p>
<p>For Demis Hassabis, this moment was the culmination of a lifelong &#8220;100-year plan&#8221; to solve intelligence and then use it to solve everything else. It was the ultimate validation of the &#8220;Profound,&#8221; the belief that AI is fundamentally a tool for scientific enlightenment, capable of ushering in an era of &#8220;radical abundance&#8221; by curing diseases, designing new materials, and unravelling the mysteries of the universe.</p>
<p>While the scientific community toasted Hassabis as a pioneer of computational biology, the corporate world demanded something far more &#8220;Prosaic.&#8221; As the supreme commander of Google’s AI efforts, Hassabis was essentially a wartime general in the most brutal corporate conflict of the 21st century. His mandate was not just to win Nobel Prizes but to crush competitors like OpenAI and Microsoft in a race for chatbots, web browsers, and ad revenue.</p>
<p>In the same year he accepted the Nobel medal, his teams were pushing out products like &#8220;Nano Banana,&#8221; a viral AI image generator used for solving homework and creating 1880s-style portraits, and fending off OpenAI’s &#8220;ChatGPT Atlas,&#8221; a browser designed to dismantle Google’s monopoly on search.</p>
<p>International Finance will examine the duality of Demis Hassabis and the organisation he leads, exploring the tension between the high-minded pursuit of Artificial General Intelligence (AGI) for scientific discovery and the commercial imperative to dominate the consumer internet.</p>
<p><strong>Polymath pursues intelligence</strong></p>
<p>Demis Hassabis is a polymath whose career has been defined by a singular obsession with the mechanics of intelligence. He was born in London in 1976 to a Greek Cypriot father and a Singaporean mother.</p>
<p>Hassabis displayed a precocious talent for strategy games. By 13, he was a chess master with an Elo rating of 2300, the second-highest rated player in the world for his age, behind only Judit Polgar. Chess taught Hassabis the value of planning, the necessity of sacrifice, and the brutal objectivity of a win-loss record.</p>
<p>However, the game also exposed the limits of the human mind, the shackles of human cognition, and it made the young boy realise that we as a species are bound by biology. He soon understood that to surpass those limits, he would need to build a machine that could think.</p>
<p>Demis Hassabis didn’t start with the mind. In the beginning, he built worlds. At 17, he joined Bullfrog Productions, a legendary video game studio founded by Peter Molyneux. There, he served as the lead programmer for Theme Park (1994), a simulation game that sold millions of copies and defined the management genre.</p>
<p>Theme Park was more than a game. It was an exercise in agent-based modelling. It required simulating the desires and behaviours of thousands of little digital visitors. It was a precursor to the complex environments DeepMind would later use to train its AI agents.</p>
<p>Demis Hassabis later founded his own studio, Elixir Studios. Its debut title, “Republic: The Revolution,” was an incredibly ambitious political simulator that promised to model the intricate social dynamics of an entire Eastern European nation. However, the game’s ambition outstripped the hardware capabilities of the time.</p>
<p>Though technically impressive, it was commercially disappointing. The experience was a crucible for Hassabis, teaching him a painful lesson: having a profound vision is useless if you cannot execute it within the constraints of reality. It was a lesson that would serve him well when navigating the corporate politics of Google decades later.</p>
<p>Realising that video games were an insufficient vessel for his ambitions, Hassabis pivoted to academia. He earned a PhD in cognitive neuroscience from University College London (UCL), focusing on episodic memory and the hippocampus. His research sought to understand how the brain encodes past experiences to imagine future scenarios.</p>
<p>It was a critical component of intelligence that was missing from the &#8220;brittle&#8221; AI of the time. In 2010, he co-founded DeepMind Technologies in London with Shane Legg and Mustafa Suleyman. Their mission statement was audacious in its simplicity.</p>
<p><strong>Google acquisition</strong></p>
<p>By 2014, DeepMind had caught the attention of the Silicon Valley giants. Facebook attempted to acquire the lab, but Google eventually won the bid, paying approximately £400 million ($650 million). For Google, the acquisition was a defensive move to secure the world’s best AI talent. For Hassabis, it was a means to access the massive computational resources required to train neural networks.</p>
<p>However, Hassabis was wary of Google’s corporate machinery. He famously negotiated a condition for the sale. He wanted them to establish an &#8220;Ethics Board&#8221; to oversee the deployment of DeepMind’s technology. The Ethics Board remains one of the most enigmatic chapters in AI history.</p>
<p>Initially heralded as a safeguard against the misuse of AGI, it became a symbol of the opacity of “Big Tech.” Years after the acquisition, investigative reports suggested that the board’s membership was never public, and it was unclear if it ever formally convened or exercised any real power.</p>
<p>Demis Hassabis later claimed the board had convened and was &#8220;progressing very well,&#8221; but dismissed enquiries by stating that discussions were confidential. DeepMind operated as a &#8220;state within a state&#8221; inside Google, shielding its academic culture from the commercial pressures of Mountain View. While Google sold ads, DeepMind played Go.</p>
<p>That independence bore fruit in 2016 when AlphaGo, a DeepMind program, defeated Lee Sedol, the world champion of the ancient board game Go. It was a watershed moment for AI, comparable to the Wright Brothers’ first flight. It demonstrated that deep reinforcement learning could produce intuition-like capabilities.</p>
<p>It was what Hassabis called &#8220;creativity.&#8221; But while AlphaGo was a scientific triumph, it made zero dollars. For nearly a decade, DeepMind was a financial black hole, burning through hundreds of millions in Google’s cash while generating negligible revenue.</p>
<p><strong>Fragmented AI efforts</strong></p>
<p>The luxury of operating as an ivory tower ended abruptly in November 2022. The launch of ChatGPT by OpenAI sent shockwaves through Google. Suddenly, the search giant looked vulnerable. Its primary revenue engine, the blue links of Google Search, faced an existential threat from conversational AI.</p>
<p>Google realised that its fragmented AI efforts, split between the product-focused Google Brain team in California and the research-focused DeepMind in London, were a liability. In April 2023, CEO Sundar Pichai announced the unthinkable. He declared the merger of these two rival fiefdoms into a single unit, “Google DeepMind,” with Hassabis as CEO.</p>
<p>It was a culture clash. Google Brain, led by Jeff Dean, had a culture of &#8220;shipping&#8221; and engineering scale. They were the team that invented the Transformer architecture (the &#8220;T&#8221; in GPT) but had failed to capitalise on it. DeepMind was academic, secretive, and focused on long-term AGI rather than consumer products.</p>
<p>No longer just a lab director protecting his scientists from product managers, Hassabis was now the &#8220;Product General&#8221; responsible for saving Google’s business. His mandate was clear. He had to ship a competitor to GPT-4, and do it fast. The merger forced a &#8220;shotgun wedding&#8221; of codebases and philosophies.</p>
<p>DeepMind’s researchers, accustomed to working on protein folding and plasma physics, were redeployed to build chatbots. The tension was palpable. Hassabis, who had once wished tech giants would move more slowly on AI deployment to ensure safety, was now the man pressing the accelerator.</p>
<p><strong>Gemini generalist launch</strong></p>
<p>While AlphaFold was winning prizes, the rest of Google DeepMind was fighting in the mud of the consumer market. The &#8220;Prosaic&#8221; reality of 2024 and 2025 has been defined by a relentless schedule of product releases, some revolutionary, others bizarre.</p>
<p>The flagship response to OpenAI was Gemini, a multimodal model family designed to power everything from Google Search to Android phones. Unlike the specialised AlphaFold, Gemini is a generalist, a jack of all trades designed to write emails, plan vacations, and code software. But the most peculiar skirmish in this war involved a model colloquially known as &#8220;Nano Banana&#8221; (Gemini 2.5 Flash Image).</p>
<p>In late 2025, this image generation tool went viral, not for curing cancer, but for a TikTok trend where users generated portraits of themselves across decades, from the 1880s to 2025. The model also gained notoriety for its ability to solve handwritten math homework, mimicking the user’s own handwriting style so perfectly that it sparked a debate about academic integrity. In one bizarre incident, an employee used it to generate a hyper-realistic image of an injured hand to fake a bike accident and get paid leave, prompting the viral tagline, &#8220;AI just broke HR verification.&#8221;</p>
<p>&#8220;Nano Banana&#8221; drives user engagement, locks people into the Google ecosystem, and demonstrates the &#8220;magic&#8221; of AI to the average consumer. The pricing models for these tools, ranging from free tiers to &#8220;Pro&#8221; subscriptions, are designed to monetise creativity at scale, a stark contrast to the open-science ethos of early DeepMind.</p>
<p>The threat to Google’s dominance intensified in October 2025 with the launch of ChatGPT Atlas, OpenAI’s AI-powered web browser. Atlas represents a paradigm shift. Instead of searching for links (Google’s model), users converse with the web. The browser features &#8220;Agent Mode,&#8221; where the AI can book flights, fill out forms, and summarise pages autonomously.</p>
<p>Atlas is a direct dagger at Chrome’s heart. If users stop searching and start &#8220;asking,&#8221; Google’s ad revenue, the lifeblood of Alphabet, evaporates. Hassabis’s team has responded with “Project Astra,” a universal AI assistant that can see and hear the world, integrated into Gemini Live.</p>
<p><strong>AlphaFold solves mystery</strong></p>
<p>Amidst the chaos of the chatbot wars, Hassabis delivered a reminder of why he started DeepMind in the first place. In 2024, the Nobel Committee recognised AlphaFold, DeepMind’s protein structure prediction system, with the Nobel Prize in Chemistry.</p>
<p>Proteins are the machinery of life. Their function is determined by their 3D shape, but predicting that shape from a string of amino acids is a problem of astronomical complexity. Levinthal’s paradox suggests it would take longer than the age of the universe to brute-force a solution.</p>
<p>AlphaFold 2, released in 2020, solved this. It predicted the structures of nearly all 200 million known proteins with atomic accuracy. The impact was immediate. Researchers used it to design malaria vaccines, understand antibiotic resistance, and develop plastic-eating enzymes. </p>
<p>For Hassabis, the Nobel was proof of his core thesis. He often said that the ultimate goal of AI is not just to create intelligent machines, but to understand intelligence itself.</p>
<p>AlphaFold was the perfect example of AI acting as a multiplier for human ingenuity, a &#8220;Hubble Telescope for biology.&#8221; In interviews following the award, Hassabis emphasised that scientific discovery was the true purpose of AI. </p>
<p>&#8220;I think we’re going to find&#8230; that some jobs get disrupted, but then new, more valuable, usually more interesting jobs get created,&#8221; he noted, framing AI as a tool for &#8220;radical abundance.&#8221;</p>
<p>However, the Nobel Prize also served as a shield. It gave Hassabis the political capital to push back against the complete commercialisation of his lab. It was a signal to the shareholders: “We are not just a chatbot factory. We are the Bell Labs of the 21st century.”</p>
<p><strong>Transparency takes a hit</strong></p>
<p>Training the next generation of AI models requires investment on a scale that rivals the “Manhattan Project.” This financial reality has escalated with the announcement of the “Stargate Project,” a massive $500 billion infrastructure initiative backed by OpenAI, SoftBank, Oracle, and the United States government.</p>
<p>This unprecedented capital injection into Google’s primary rival fundamentally alters the landscape. For Google to compete, it must match this investment dollar for dollar. Alphabet’s stock (GOOGL) has performed well, largely due to the perception that Gemini has stabilised the ship against the Microsoft-OpenAI alliance.</p>
<p>However, the transition from a high-margin search business to a high-cost AI compute business is risky. Every query answered by Gemini costs significantly more than a traditional Google search.</p>
<p>Demis Hassabis has had to make a devil’s bargain. To fund the &#8220;Profound&#8221; (AGI for science), he must win the &#8220;Prosaic&#8221; (commercial AI). &#8220;Commercial products fund science&#8221; is the unspoken mantra. The revenue from Google Cloud and Search pays for the TPUs that power “AlphaFold 3” and “AlphaProteo.” This reality has forced DeepMind to become less open.</p>
<p>The days of publishing every breakthrough in Nature immediately are gone. Now, technical reports are often withheld or redacted to prevent competitors like OpenAI and China’s DeepSeek from gaining an edge. The &#8220;Open&#8221; in OpenAI may be a misnomer, but Google DeepMind has also closed its doors.</p>
<p><strong>Alchemist’s dilemma</strong></p>
<p>Demis Hassabis stands at a crossroads. On one hand, he holds the Nobel Prize, a symbol of AI’s potential to elevate humanity. On the other hand, he holds the keys to the world’s most powerful ad-targeting engine, weaponised with generative AI.</p>
<p>The &#8220;Age of Paranoia,&#8221; fuelled by deepfakes and AI fraud, is rising alongside the &#8220;Age of Abundance&#8221; promised by AlphaFold. Hassabis’s challenge is to navigate this duality. He must ensure that the drive for profit does not corrupt the pursuit of discovery. The &#8220;Nano Banana&#8221; generated portraits and the &#8220;Atlas&#8221; browser wars are the noise of the present. They are the &#8220;Prosaic&#8221; tax that must be paid. But Hassabis’s eyes remain fixed on the horizon, on the &#8220;Profound.&#8221;</p>
<p>The young super-genius has come a long way from his early chess tournaments and video game development days. Hassabis has revolutionised how human beings think and act. His research in AI has also contributed to advancements in biology that would otherwise have taken another century.</p>
<p>No matter how things evolve from this point, Hassabis and his version of ethics will have a profound impact on how AI is used. He is the crusader fighting for the soul of Silicon Valley. Only time will tell whether science and human advancement will triumph against ads and corporate profits.</p>
<p>Demis Hassabis is one of the few individuals in history who simultaneously transformed science and business, which makes him both fascinating and concerning. On one hand, AlphaFold proves that AI can solve problems humans could not solve in decades. On the other hand, the commercial pressures of Google and the chatbot wars show that innovation is tied to profit.</p>
<p>Hassabis is balancing the desire to advance knowledge with the need to dominate markets. How he manages this will define whether AI truly serves humanity or becomes just another tool for corporate control. Right now, his choices are shaping the future of science, ethics, and the very way people interact with technology. </p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/">Demis Hassabis expands tech throne</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The AI leadership test</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/the-ai-leadership-test/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-ai-leadership-test</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/the-ai-leadership-test/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 19:05:53 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[Finance]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[investment]]></category>
		<category><![CDATA[Saudi Arabia]]></category>
		<category><![CDATA[workforce]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54942</guid>

					<description><![CDATA[<p>Research shows that only 5.4% of firms had formally adopted generative AI as of early 2024</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-ai-leadership-test/">The AI leadership test</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
					<content:encoded><![CDATA[<p>The rise of generative AI and agentic AI presents an existential imperative: a foundational shift that threatens to redefine what software is, who wields it, and how nations generate wealth.</p>
<p>Mohammed Al-Qarni, an academic and consultant on AI for business, said, “This is a quantum jump in potential productivity, yet history warns us that success hinges entirely on the political and organisational will to design frameworks capable of seizing, not squandering, this opportunity.”</p>
<p>The chilling reality is that this transition is arguably more disruptive than the Software-as-a-Service revolution that preceded it. But what happens when corporations lack the courage to lead? The historical record of digital transformation is littered with the corpses of once-dominant giants, and Kodak serves as the perpetual, damning example.</p>
<p>Despite pioneering digital technology, the company’s strategic reluctance to scale its own innovation, driven by a fear of cannibalising its immensely profitable film business, proved a fatal weakness. Such a protectionist approach and internal cultural resistance led to a catastrophic delay, allowing competitors like Canon and Sony, which had adopted flexible and responsive digital strategies, to capture significant market share.</p>
<p>Survival in a disruptive era demands a willingness to actively disrupt your own established, profitable business models. The transformation required is a radical and holistic overhaul.</p>
<p>Today, the same pattern of institutional failure is visible. While 88% of organisations report utilising AI in at least one business function, showing a clear awareness of the threat, the majority remain dangerously vulnerable. Nearly two-thirds of them confess they have yet to begin scaling the technology across the enterprise, remaining confined to the experimentation or piloting phase.</p>
<p>Such a gap between acknowledgement and action is the single most dangerous vulnerability, demonstrating a failure to establish the strategic and organisational frameworks necessary to manage the disruption. For those who do manage to scale, the financial verdict is already in.</p>
<p>Organisations report an average return on investment of 1.7x on AI and generative AI investments, alongside cost reductions ranging from 26% to 31% across core functions like supply chain and finance. Executives cite tangible improvements, reporting 10% to 20% gains in accuracy, productivity, and time-to-market.</p>
<p>The barriers preventing such scaled adoption are not rooted in technical limitations but in human frailty and strategic myopia. The most frequently cited obstacle is the inability to define clear use cases or establish demonstrable business value.</p>
<p>Such a pattern reflects a failure of imagination, rooted in trying to apply AI to traditional, inefficient problems rather than focusing on “AI-native problems,” which are challenges that become uniquely tractable or profitable only through AI-first thinking.</p>
<p>Compounding this strategic deficit is the internal “human firewall”: nearly half of CEOs report that employees are resistant or even hostile to AI adoption, often driven by profound anxiety over job security. To overcome this resistance, leadership must invest in upskilling, rewire organisational culture, and establish governance that instils confidence and trust.</p>
<p>Furthermore, even where the will exists, the technical foundation often fails. Businesses consistently identify data quality, availability, and the management of silos as the paramount technical barriers to implementation.</p>
<p>“Without clean, well-organised, and accessible data, advanced models underperform, undermining the entire investment. Agentic AI systems, which require continuous refinement, are particularly dependent on real-time data pipelines and robust governance capabilities often incompatible with rigid, older legacy infrastructure,” Al-Qarni stated.</p>
<p><strong>Strategic autonomy</strong></p>
<p>In an era of accelerating technological competition, the AI transition is fundamentally a geopolitical contest, where national strategy is the new competitive differentiator. The global economic benefits are colossal: some $19.9 trillion is projected to be injected into the global economy through 2030, a figure accounting for 3.5% of global GDP that year.</p>
<p>That injection is projected to create a permanent increase in economic activity, with compounded GDP levels potentially 1.5% higher by 2035. But here is the critical economic context: global growth is projected to decelerate, slowing from 3.3% in 2024 to 3.2% in 2025, while major development finance providers are cutting aid and adopting a markedly more transactional, geopolitical approach to investment.</p>
<p>The United States, the United Kingdom, France, and Germany have all simultaneously cut aid for the first time in nearly thirty years. Consequently, nations can no longer rely on traditional development finance; they must secure resources and advanced infrastructure through massive, proactive investment and strategic partnerships.</p>
<p>Moreover, the pace of AI innovation is inextricably linked to the regulatory landscape, and flexible regulatory environments, such as that in the United States, are already projected to outperform those with more rigid frameworks, confirming that policy itself is a critical competitive lever.</p>
<p>Against this backdrop of global competition and shrinking fiscal space, Saudi Arabia’s comprehensive strategy, anchored in the ambitious economic diversification strategy named “Vision 2030,” provides a clear, state-led template for achieving strategic autonomy and leapfrogging competitors.</p>
<p>Artificial intelligence is positioned as the core technology driving economic diversification beyond oil and building a knowledge-based economy. The National Strategy for Data and AI (NSDAI), established in 2020 by the Saudi Data &#038; AI Authority (SDAIA), sets extremely aggressive, non-negotiable targets to rank among the world&#8217;s top 15 nations in AI by 2030.</p>
<p>Massive financial and infrastructural commitments underpin that ambition. The Kingdom aims to attract SAR 75 billion ($20 billion) in AI investments by 2030, covering both local funding and foreign direct investment (FDI) for data and AI initiatives.</p>
<p>Such committed capital is necessary to secure the foundational computational power, demonstrated by strategic partnerships already accelerating the buildout, including the $10 billion, five-year collaboration between AMD and Humain to deploy up to 500 megawatts of AI infrastructure by early 2026, and a $5 billion-plus “AI Zone” partnership with Amazon Web Services (AWS) and Humain.</p>
<p>By aggressively attracting billions in investment from global leaders, the Kingdom aims to mitigate its dependency on transactional global aid and secure continuous access to advanced chip technology, thereby establishing critical strategic autonomy in the global AI race.</p>
<p>Critically, the NSDAI also prioritises policy flexibility, aiming to enact “the most welcoming legislation” for data and AI businesses and talent, including fast-track approvals and IP protections.</p>
<p>Furthermore, recognising that infrastructure is meaningless without talent, the strategy mandates training over 20,000 data and AI specialists to transform the national workforce. Such a comprehensive approach to investment, infrastructure, policy, and human capital serves as the blueprint for securing strategic advantage.</p>
<p><strong>Human-AI value shift</strong></p>
<p>To capture the true value of AI, organisations must discard incrementalism and adopt an AI-first operating model rooted in autonomy. The process begins with an “automation-first mindset,” redesigning processes to embed AI capabilities as core mission enablers, while ensuring systems are modular and interoperable to avoid vendor lock-in.</p>
<p>The primary goal is to streamline workflows and reduce manual effort, unlocking operational savings that can be strategically reinvested into high-value, mission-critical areas. The real disruption lies in embracing agentic AI: autonomous agents capable of complex decision-making and of orchestrating workflows that rigid legacy systems simply cannot support.</p>
<p>The transition requires disciplined execution; the failure of projects like Volkswagen’s Cariad highlights the danger of strategic overreach, where an attempt is made to deliver a complete, custom technology stack without necessary sequencing and ruthless scope control.</p>
<p>The economic consequences of the transition are profound, resting on the fundamental restructuring of service value. As automation commoditises efficiency, the value proposition shifts dramatically. Professional services will become the most valuable service line, transitioning from transactional execution to strategy-first advisory, guiding organisations on how to architect and implement these complex, autonomous systems.</p>
<p>Simultaneously, managed services will ascend to focus on autonomous orchestration, while support services experience heavy automation at the core, refocusing human expertise onto the premium edges—complex diagnostics and bespoke problem-solving that require critical thinking.</p>
<p>For nations like Saudi Arabia, targeting the training of 20,000 specialists, this predictive shift confirms that training must prioritise advanced advisory, architectural, and integration skills, the core competencies of the high-value professional services sector, to ensure the nation captures the top tier of economic value.</p>
<p>Such a transformation is fundamentally about engineering a robust partnership between human judgment and machine intelligence, establishing systems that are more creative, resilient, and adaptable than either could be in isolation.</p>
<p>While AI excels at processing vast datasets and identifying patterns, it cannot apply human judgment, question its own assumptions, or navigate ethical complexities. Consequently, the most valuable human skills in the AI era will be critical thinking, ethical reasoning, and domain expertise, which assess, refine, and guide AI outputs.</p>
<p>Crucially, the strategic deployment of AI acts as a powerful mechanism for improving overall workforce performance. Studies show AI tools provided a 43% performance increase for lower-performing consultants, compared to 17% for high performers, demonstrating their power to lift the operational baseline of the entire organisation. To realise these systemic productivity gains, organisations must move beyond informal “shadow IT” use.</p>
<p>For chief strategy officers and chief digital officers, the path forward is clear. They must redesign for autonomy, prioritise human-AI complementarity by formalising adoption and reskilling the workforce, and govern and measure strategically.</p>
<p>Only by moving beyond basic ROI and aggressively tracking “Trust and Adoption Velocity” can organisations ensure they are building sustainable, resilient competitive advantage in the new economic epoch.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/the-ai-leadership-test/">The AI leadership test</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/the-ai-leadership-test/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
