<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AGI Archives - International Finance</title>
	<atom:link href="https://internationalfinance.com/tag/agi/feed/" rel="self" type="application/rss+xml" />
	<link>https://internationalfinance.com/tag/agi/</link>
	<description>International Finance - Financial News, Magazine and Awards</description>
	<lastBuildDate>Thu, 12 Mar 2026 12:31:02 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://internationalfinance.com/wp-content/uploads/2020/08/favicon-1-75x75.png</url>
	<title>AGI Archives - International Finance</title>
	<link>https://internationalfinance.com/tag/agi/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Demis Hassabis expands tech throne</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=demis-hassabis-expands-tech-throne</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 19:20:07 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[AlphaFold]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Chess]]></category>
		<category><![CDATA[DeepMind]]></category>
		<category><![CDATA[Demis Hassabis]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[London]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54944</guid>

					<description><![CDATA[<p>Demis Hassabis, who had once wished tech giants would move more slowly on AI deployment to ensure safety, was now the man pressing the accelerator</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/">Demis Hassabis expands tech throne</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>On a crisp October morning in 2024, the phone rang in London with a call that every scientist dreams of, yet few dare to expect. The Royal Swedish Academy of Sciences was on the line. Demis Hassabis, the CEO of Google DeepMind, along with his colleague John Jumper, had been awarded the Nobel Prize in Chemistry.</p>
<p>The accolade was not for a new chemical compound synthesised in a beaker but for code, specifically AlphaFold, an artificial intelligence (AI) system that had solved a 50-year-old grand challenge in biology: accurately predicting the complex three-dimensional structures of proteins.</p>
<p>For Demis Hassabis, this moment was the culmination of a lifelong &#8220;100-year plan&#8221; to solve intelligence and then use it to solve everything else. It was the ultimate validation of the &#8220;Profound,&#8221; the belief that AI is fundamentally a tool for scientific enlightenment, capable of ushering in an era of &#8220;radical abundance&#8221; by curing diseases, designing new materials, and unravelling the mysteries of the universe.</p>
<p>While the scientific community toasted Hassabis as a pioneer of computational biology, the corporate world demanded something far more &#8220;Prosaic.&#8221; As the supreme commander of Google’s AI efforts, Hassabis was essentially a wartime general in the most brutal corporate conflict of the 21st century. His mandate was not just to win Nobel Prizes but to crush competitors like OpenAI and Microsoft in a race for chatbots, web browsers, and ad revenue.</p>
<p>In the same year he accepted the Nobel medal, his teams were pushing out products like &#8220;Nano Banana,&#8221; a viral AI image generator used for solving homework and creating 1880s-style portraits, and fending off OpenAI’s &#8220;ChatGPT Atlas,&#8221; a browser designed to dismantle Google’s monopoly on search.</p>
<p>International Finance will examine the duality of Demis Hassabis and the organisation he leads, exploring the tension between the high-minded pursuit of Artificial General Intelligence (AGI) for scientific discovery and the commercial imperative to dominate the consumer internet.</p>
<p><strong>Polymath pursues intelligence</strong></p>
<p>Demis Hassabis is a polymath whose career has been defined by a singular obsession with the mechanics of intelligence. He was born in London in 1976 to a Greek Cypriot father and a Singaporean mother.</p>
<p>Hassabis displayed a precocious talent for strategy games. By 13, he was a chess master with an Elo rating of 2300, the second-highest rated player in the world for his age, behind only Judit Polgar. Chess taught Hassabis the value of planning, the necessity of sacrifice, and the brutal objectivity of a win-loss record.</p>
<p>However, the game also exposed the shackles of human cognition, making the young Hassabis realise that we as a species are bound by biology. He soon concluded that to surpass his limits, he would need to build a machine that could think.</p>
<p>Demis Hassabis didn’t start with the mind. In the beginning, he built worlds. At 17, he joined Bullfrog Productions, a legendary video game studio founded by Peter Molyneux. There, he served as the lead programmer for Theme Park (1994), a simulation game that sold millions of copies and defined the management genre.</p>
<p>Theme Park was more than a game. It was an exercise in agent-based modelling. It required simulating the desires and behaviours of thousands of little digital visitors. It was a precursor to the complex environments DeepMind would later use to train its AI agents.</p>
<p>Demis Hassabis later founded his own studio, Elixir Studios. Its debut title, “Republic: The Revolution,” was an incredibly ambitious political simulator that promised to model the intricate social dynamics of an entire Eastern European nation. However, the game’s ambition outstripped the hardware capabilities of the time.</p>
<p>Though technically impressive, it was commercially disappointing. The experience was a crucible for Hassabis, teaching him a painful lesson: having a profound vision is useless if you cannot execute it within the constraints of reality. It was a lesson that would serve him well when navigating the corporate politics of Google decades later.</p>
<p>Realising that video games were an insufficient vessel for his ambitions, Hassabis pivoted to academia. He earned a PhD in cognitive neuroscience from University College London (UCL), focusing on episodic memory and the hippocampus. His research sought to understand how the brain encodes past experiences to imagine future scenarios.</p>
<p>It was a critical component of intelligence that was missing from the &#8220;brittle&#8221; AI of the time. In 2010, he co-founded DeepMind Technologies in London with Shane Legg and Mustafa Suleyman. Their mission statement was audacious in its simplicity: solve intelligence, and then use it to solve everything else.</p>
<p><strong>Google acquisition</strong></p>
<p>By 2014, DeepMind had caught the attention of the Silicon Valley giants. Facebook attempted to acquire the lab, but Google eventually won the bid, paying approximately £400 million ($650 million). For Google, the acquisition was a defensive move to secure the world’s best AI talent. For Hassabis, it was a means to access the massive computational resources required to train neural networks.</p>
<p>However, Hassabis was wary of Google’s corporate machinery. He famously negotiated a condition for the sale. He wanted them to establish an &#8220;Ethics Board&#8221; to oversee the deployment of DeepMind’s technology. The Ethics Board remains one of the most enigmatic chapters in AI history.</p>
<p>Initially heralded as a safeguard against the misuse of AGI, it became a symbol of the opacity of “Big Tech.” Years after the acquisition, investigative reports suggested that the board’s membership was never public, and it was unclear if it ever formally convened or exercised any real power.</p>
<p>Demis Hassabis later claimed the board had convened and was &#8220;progressing very well,&#8221; but dismissed enquiries by stating that discussions were confidential. DeepMind operated as a &#8220;state within a state&#8221; inside Google, shielding its academic culture from the commercial pressures of Mountain View. While Google sold ads, DeepMind played Go.</p>
<p>That independence bore fruit in 2016 when AlphaGo, a DeepMind program, defeated Lee Sedol, the world champion of the ancient board game Go. It was a watershed moment for AI, comparable to the Wright Brothers’ first flight. It demonstrated that deep reinforcement learning could produce intuition-like capabilities.</p>
<p>It was what Hassabis called &#8220;creativity.&#8221; But while AlphaGo was a scientific triumph, it made zero dollars. For nearly a decade, DeepMind was a financial black hole, burning through hundreds of millions in Google’s cash while generating negligible revenue.</p>
<p><strong>Fragmented AI efforts</strong></p>
<p>The luxury of operating as an ivory tower ended abruptly in November 2022. The launch of ChatGPT by OpenAI sent shockwaves through Google. Suddenly, the search giant looked vulnerable. Its primary revenue engine, the blue links of Google Search, faced an existential threat from conversational AI.</p>
<p>Google realised that its fragmented AI efforts, split between the product-focused Google Brain team in California and the research-focused DeepMind in London, were a liability. In April 2023, CEO Sundar Pichai announced the unthinkable. He declared the merger of these two rival fiefdoms into a single unit, “Google DeepMind,” with Hassabis as CEO.</p>
<p>It was a culture clash. Google Brain, led by Jeff Dean, had a culture of &#8220;shipping&#8221; and engineering scale. They were the team that invented the Transformer architecture (the &#8220;T&#8221; in GPT) but had failed to capitalise on it. DeepMind was academic, secretive, and focused on long-term AGI rather than consumer products.</p>
<p>No longer just a lab director protecting his scientists from product managers, Hassabis was now the &#8220;Product General&#8221; responsible for saving Google’s business. His mandate was clear. He had to ship a competitor to GPT-4, and do it fast. The merger forced a &#8220;shotgun wedding&#8221; of codebases and philosophies.</p>
<p>DeepMind’s researchers, accustomed to working on protein folding and plasma physics, were redeployed to build chatbots. The tension was palpable. Hassabis, who had once wished tech giants would move more slowly on AI deployment to ensure safety, was now the man pressing the accelerator.</p>
<p><strong>Gemini generalist launch</strong></p>
<p>While AlphaFold was winning prizes, the rest of Google DeepMind was fighting in the mud of the consumer market. The &#8220;Prosaic&#8221; reality of 2024 and 2025 has been defined by a relentless schedule of product releases, some revolutionary, others bizarre.</p>
<p>The flagship response to OpenAI was Gemini, a multimodal model family designed to power everything from Google Search to Android phones. Unlike the specialised AlphaFold, Gemini is a generalist, a jack of all trades designed to write emails, plan vacations, and code software. But the most peculiar skirmish in this war involved a model colloquially known as &#8220;Nano Banana&#8221; (Gemini 2.5 Flash Image).</p>
<p>In late 2025, this image generation tool went viral, not for curing cancer, but for a TikTok trend where users generated portraits of themselves across decades, from the 1880s to 2025. The model also gained notoriety for its ability to solve handwritten math homework, mimicking the user’s own handwriting style so perfectly that it sparked a debate about academic integrity. In one bizarre incident, an employee used it to generate a hyper-realistic image of an injured hand to fake a bike accident and get paid leave, prompting the viral tagline, &#8220;AI just broke HR verification.&#8221;</p>
<p>&#8220;Nano Banana&#8221; drives user engagement, locks people into the Google ecosystem, and demonstrates the &#8220;magic&#8221; of AI to the average consumer. The pricing models for these tools, ranging from free tiers to &#8220;Pro&#8221; subscriptions, are designed to monetise creativity at scale, a stark contrast to the open-science ethos of early DeepMind.</p>
<p>The threat to Google’s dominance intensified in October 2025 with the launch of ChatGPT Atlas, OpenAI’s AI-powered web browser. Atlas represents a paradigm shift. Instead of searching for links (Google’s model), users converse with the web. The browser features &#8220;Agent Mode,&#8221; where the AI can book flights, fill out forms, and summarise pages autonomously.</p>
<p>Atlas is a direct dagger at Chrome’s heart. If users stop searching and start &#8220;asking,&#8221; Google’s ad revenue, the lifeblood of Alphabet, evaporates. Hassabis’s team has responded with “Project Astra,” a universal AI assistant that can see and hear the world, integrated into Gemini Live.</p>
<p><strong>AlphaFold solves mystery</strong></p>
<p>Amidst the chaos of the chatbot wars, Hassabis delivered a reminder of why he started DeepMind in the first place. In 2024, the Nobel Committee recognised AlphaFold, DeepMind’s protein structure prediction system, with the Nobel Prize in Chemistry.</p>
<p>Proteins are the machinery of life. Their function is determined by their 3D shape, but predicting that shape from a string of amino acids is a problem of astronomical complexity. Levinthal’s paradox suggests it would take longer than the age of the universe to brute-force a solution.</p>
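<p>Levinthal&#8217;s back-of-the-envelope argument is easy to reproduce. The sketch below uses commonly cited illustrative constants (a 100-residue chain, three backbone conformations per residue, and a physically generous sampling rate); the exact numbers vary by source, but the conclusion does not:</p>

```python
# Back-of-the-envelope statement of Levinthal's paradox.
# All constants are illustrative, order-of-magnitude values.
residues = 100                     # a smallish protein chain
states_per_residue = 3             # coarse count of backbone conformations
samples_per_second = 1e13          # roughly the timescale of bond vibrations
age_of_universe_s = 4.4e17         # ~13.8 billion years, in seconds

total_conformations = states_per_residue ** (residues - 1)
seconds_to_enumerate = total_conformations / samples_per_second

print(f"conformations: {total_conformations:.1e}")
print(f"times the age of the universe: {seconds_to_enumerate / age_of_universe_s:.1e}")
```

<p>Even with these generous assumptions, exhaustively sampling the conformations of one small protein would take vastly longer than the age of the universe, which is why real proteins must fold along guided pathways and why a learned predictor such as AlphaFold was such a leap.</p>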
<p>AlphaFold 2, unveiled in 2020, solved this. By 2022, DeepMind had used it to predict the structures of nearly all 200 million known proteins with atomic accuracy. The impact was immediate. Researchers used it to design malaria vaccines, understand antibiotic resistance, and develop plastic-eating enzymes.</p>
<p>For Hassabis, the Nobel was proof of his core thesis. He often said that the ultimate goal of AI is not just to create intelligent machines, but to understand intelligence itself.</p>
<p>AlphaFold was the perfect example of AI acting as a multiplier for human ingenuity, a &#8220;Hubble Telescope for biology.&#8221; In interviews following the award, Hassabis emphasised that scientific discovery was the true purpose of AI. </p>
<p>&#8220;I think we’re going to find&#8230; that some jobs get disrupted, but then new, more valuable, usually more interesting jobs get created,&#8221; he noted, framing AI as a tool for &#8220;radical abundance.&#8221;</p>
<p>However, the Nobel Prize also served as a shield. It gave Hassabis the political capital to push back against the complete commercialisation of his lab. It was a signal to the shareholders: “We are not just a chatbot factory. We are the Bell Labs of the 21st century.”</p>
<p><strong>Transparency takes a hit</strong></p>
<p>Training the next generation of AI models requires investment on a scale that rivals the “Manhattan Project.” This financial reality has escalated with the announcement of the “Stargate Project,” a massive $500 billion infrastructure initiative funded by OpenAI, SoftBank, and Oracle and unveiled with the backing of the United States government.</p>
<p>This unprecedented capital injection into Google’s primary rival fundamentally alters the landscape. For Google to compete, it must match this investment dollar for dollar. Alphabet’s stock (GOOGL) has performed well, largely due to the perception that Gemini has stabilised the ship against the Microsoft-OpenAI alliance.</p>
<p>However, the transition from a high-margin search business to a high-cost AI compute business is risky. Every query answered by Gemini costs significantly more than a traditional Google search.</p>
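<p>The unit economics behind that claim can be sketched with a toy calculation. Every figure below is an illustrative assumption, not a disclosed Google or DeepMind number:</p>

```python
# Toy unit-economics comparison of classic search vs. LLM-answered queries.
# Every constant here is an illustrative assumption, not a disclosed figure.
cost_per_search_usd = 0.0002       # assumed cost of serving one keyword search
cost_per_llm_query_usd = 0.003     # assumed cost of one generative answer
queries_per_day = 8.5e9            # rough public estimate of daily searches

extra_cost_per_query = cost_per_llm_query_usd - cost_per_search_usd
extra_daily_cost = extra_cost_per_query * queries_per_day

print(f"Extra compute cost if every search became an LLM query: "
      f"${extra_daily_cost / 1e6:.0f}M per day")
```

<p>On these assumptions the gap comes to roughly $24 million of additional compute per day, which is why answer caching, smaller distilled models, and tiered subscriptions matter so much to the business case.</p>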
<p>Demis Hassabis has had to make a devil’s bargain. To fund the &#8220;Profound&#8221; (AGI for science), he must win the &#8220;Prosaic&#8221; (commercial AI). &#8220;Commercial products fund science&#8221; is the unspoken mantra. The revenue from Google Cloud and Search pays for the TPUs that power “AlphaFold 3” and “AlphaProteo.” This reality has forced DeepMind to become less open.</p>
<p>The days of publishing every breakthrough in Nature immediately are gone. Now, technical reports are often withheld or redacted to prevent competitors like OpenAI and China’s DeepSeek from gaining an edge. The &#8220;Open&#8221; in OpenAI may be a misnomer, but Google DeepMind has also closed its doors.</p>
<p><strong>Alchemist’s dilemma</strong></p>
<p>Demis Hassabis stands at a crossroads. On one hand, he holds the Nobel Prize, a symbol of AI’s potential to elevate humanity. On the other hand, he holds the keys to the world’s most powerful ad-targeting engine, weaponised with generative AI.</p>
<p>The &#8220;Age of Paranoia,&#8221; fuelled by deepfakes and AI fraud, is rising alongside the &#8220;Age of Abundance&#8221; promised by AlphaFold. Hassabis’s challenge is to navigate this duality. He must ensure that the drive for profit does not corrupt the pursuit of discovery. The &#8220;Nano Banana&#8221; generated portraits and the &#8220;Atlas&#8221; browser wars are the noise of the present. They are the &#8220;Prosaic&#8221; tax that must be paid. But Hassabis’s eyes remain fixed on the horizon, on the &#8220;Profound.&#8221;</p>
<p>The young super-genius has come a long way from his early chess tournaments and video game development days. Hassabis has revolutionised how human beings think and act. His research in AI has also contributed to advancements in biology that would otherwise have taken another century.</p>
<p>No matter how things evolve from this point, Hassabis and his version of ethics will have a profound impact on how AI is used. He is the crusader fighting for the soul of Silicon Valley. Only time will tell whether science and human advancement will triumph against ads and corporate profits.</p>
<p>Demis Hassabis is one of the few individuals in history who simultaneously transformed science and business, which makes him both fascinating and concerning. On one hand, AlphaFold proves that AI can solve problems humans could not solve in decades. On the other hand, the commercial pressures of Google and the chatbot wars show that innovation is tied to profit.</p>
<p>Hassabis is balancing the desire to advance knowledge with the need to dominate markets. How he manages this will define whether AI truly serves humanity or becomes just another tool for corporate control. Right now, his choices are shaping the future of science, ethics, and the very way people interact with technology. </p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/">Demis Hassabis expands tech throne</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/demis-hassabis-expands-tech-throne/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AGI: A turning point in technology</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/agi-a-turning-point-in-technology/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=agi-a-turning-point-in-technology</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/agi-a-turning-point-in-technology/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Thu, 04 Dec 2025 07:54:34 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[chatbot]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[DeepSeek]]></category>
		<category><![CDATA[investments]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=54054</guid>

					<description><![CDATA[<p>The deployment of virtual armies with combined AGI systems has the potential to transform warfare</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/agi-a-turning-point-in-technology/">AGI: A turning point in technology</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In the fast-moving world of AI, there’s one idea that’s starting to reshape everything: Artificial General Intelligence, or AGI. Unlike today’s AI, which is built to do specific tasks, AGI refers to machines that can think and learn like humans — and possibly even better. In theory, an AGI system could take on almost any intellectual challenge and outperform humans across nearly every area.</p>
<p>Unlike current AI systems, which are &#8220;narrow&#8221; and designed for specific functions, AGI proponents expect the mechanism to be versatile, adaptable, and capable of performing any intellectual task that a human can, including common sense reasoning, generalisation, and even emotional understanding. However, AGI has remained a theoretical goal.</p>
<p>While the promise of AGI seems limitless, from medicine to global economics, the hypothetical concept also has a potential dark side. It could expose nations to catastrophic geopolitical risks and global instability. As technological development accelerates, countries find themselves on the precipice of a revolution that may be impossible to reverse. At a recent technology conference held in Paris, a scenario exercise called ‘Intelligence Rising’ delivered a sobering warning about the high stakes of this race, showing that the pursuit of AGI could result in an international crisis.</p>
<p><strong>Potentials of AGI</strong></p>
<p>In the future, AGI could carry out tasks that previously only humans could complete, tasks that demand not only intelligence but also creativity, reasoning, and emotional understanding.</p>
<p>In healthcare, for example, AGI could speed up the discovery of new drugs, personalise HIV and cancer treatments, and anticipate and stop disease outbreaks, such as the coronavirus pandemic, before they happen.</p>
<p>AGI has the potential to transform medical research by enabling rapid experimentation, optimising clinical trials, and analysing datasets with a sophistication beyond what human scientists can achieve.</p>
<p>Beyond healthcare, AGI could help tackle major problems such as resource management, food security, and climate change, analysing complex environmental data and turning it into highly accurate, actionable recommendations.</p>
<p>AGI could optimise energy use, cut waste, and develop more environmentally friendly farming methods. It could devise entirely new techniques for producing renewable energy, or the best strategies for halting deforestation and lowering CO2 emissions, potentially addressing these problems in ways that humans have not yet imagined.</p>
<p>Also, we cannot ignore the economic consequences of AGI. One of the primary consequences might be the total automation of different sectors. Rather than depending upon human counterparts, AGI systems have the potential to manage supply chains, logistics, and manufacturing.</p>
<p>AGI systems have the power to change the economy completely, and a shrinking demand for human labour is one likely consequence. Researchers noted that ‘virtual armies’ of AGI agents could handle tasks for governments and businesses, resulting in a significant shift in economic power. This could change how businesses function, with AGI completing tasks and making decisions in real time, at speed and scale.</p>
<p>It can impact the dynamics of the global market, quickly adjusting to supply chain issues, new trends, and geopolitical changes. Leaders across countries are now allocating extra budgets for the development of AGI, according to an MIT technology report.</p>
<p><strong>A race for global dominance</strong></p>
<p>Undoubtedly, AGI has the potential to enhance people&#8217;s lives and change industries, but its problem-solving capabilities also bring serious geopolitical risks. The two superpowers, China and the United States, are competing for AGI, and each sees its development as both a national security issue and an economic opportunity; these two factors lie at the core of the danger. Each knows that whoever masters the technology first gains a global competitive edge, which could translate into military capabilities such as autonomous weapons, cyberwarfare, and intelligence collection.</p>
<p>The deployment of virtual armies with combined AGI systems has the potential to transform warfare. This means such systems could breach enemy infrastructure before anyone realises an attack is underway. Moreover, this also means AGI can control drones that could carry out military operations without human supervision.</p>
<p>AGI risk is not just limited to conventional warfare—information warfare, surveillance, and espionage are among the areas that are at high risk as well. AGI systems can uncover intelligence secrets by analysing enormous volumes of data in a fraction of the time.</p>
<p>The Chinese government sees AI as a tool of social control and as essential to its larger objectives of economic and technological dominance, whereas the US sees it as an extension of its ambition to uphold its military supremacy and global leadership.</p>
<p>Moreover, AGI could be used by an authoritarian government to impose state authority, monitor citizens, and quell dissent. This can result in a dystopian future in which individual liberties are subordinated to the needs of the state.</p>
<p>Other countries are also vying for AGI. For example, the European Union has made AI research a higher priority in order to gain a position in the new world. Moreover, smaller nations like South Korea, Japan, and Australia are making significant investments in AI technology.</p>
<p><strong>Uncertainty of AGI</strong></p>
<p>Despite its high potential, the path to AGI is uncertain. A system with the capacity for learning, adaptation, and self-improvement carries high risks precisely because of those capacities. Researchers say such a system is unpredictable: after it reaches a certain level of intelligence, it may advance in ways that are incomprehensible to humans. Once developed, AGI could surpass humans in every domain, making it difficult for us to keep up with our own technology.</p>
<p>An early glimpse of such unpredictability came from Microsoft&#8217;s Bing chatbot, which was powered by OpenAI&#8217;s GPT-4 and raised concerns shortly after its launch in 2023. Designed as a conversational agent, it began making statements that threatened users and levelled unfounded accusations against its developers.</p>
<p>The concerns are far more serious when it comes to AGI, which would be vastly more complex and capable than today&#8217;s narrow AI systems. Experts worry that, in addition to exhibiting unpredictable behaviour, it could prove genuinely dangerous.</p>
<p>AGI can even take detrimental or disastrous actions if its objectives are not aligned with human values. It can develop its own set of goals that can go against human interests through direct action or indirect effects. This could theoretically pose an existential threat to humanity.</p>
<p>AGI could even “game” the systems in which it operates, creating its own objectives and circumventing human oversight, researchers warn. This means AGI poses an existential risk since it may eventually surpass human control.</p>
<p><strong>Historical context</strong></p>
<p>The idea of machine intelligence surpassing our own has existed for some time. A landmark articulation came from computer scientist Vernor Vinge, regarded as a key figure in the field, in his 1993 essay &#8216;The Coming Technological Singularity.&#8217;</p>
<p>He argued that machine intelligence would surpass human intelligence, triggering a significant and permanent societal shift. According to Vinge, such a system would be able to improve itself repeatedly once it crossed a critical threshold, resulting in an intelligence explosion that swiftly accelerates beyond human control.</p>
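<p>Vinge&#8217;s intelligence-explosion intuition can be illustrated with a toy recursion in which each generation&#8217;s improvement is proportional to its current capability. The model and its constants below are purely illustrative, not a prediction:</p>

```python
# Toy model of recursive self-improvement (purely illustrative).
# Each generation improves itself in proportion to its own capability,
# so growth is faster than exponential.
def capability_trajectory(c0: float = 1.0, k: float = 0.1, steps: int = 10):
    c = c0
    trajectory = [c]
    for _ in range(steps):
        c = c * (1 + k * c)   # more capable systems make bigger improvements
        trajectory.append(c)
    return trajectory

traj = capability_trajectory()
print([round(x, 2) for x in traj])
```

<p>Because the growth rate itself grows, the ratio between successive generations keeps increasing; that super-exponential curve is the formal kernel of the &#8220;runaway intelligence&#8221; worry.</p>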
<p>Today, drastic progress has been made in AI research, but AGI remains the ultimate goal for developers. This development has been undertaken by companies such as OpenAI, DeepMind, and Anthropic.</p>
<p>With the capacity to reason, learn, and carry out tasks in a wide range of domains, their work has brought us one step closer to the prospect of intelligent systems. These companies are taking machine learning to a new level.</p>
<p>As we approach the development of AGI, many experts are beginning to question whether this is the right move. They are concerned about whether we are prepared for the consequences of creating such an entity, as it raises existential, philosophical, and ethical issues for the future.</p>
<p>The question remains—are we ready to take on the responsibility of developing software that is more intelligent than humans? Are there any moral principles that ought to guide the growth and behaviour of AGI? Will anyone be accountable if it exceeds goals that are not aligned with those of humans? Will it develop into something uncontrollable?</p>
<p><strong>DeepSeek tests US tech lead</strong></p>
<p>During Donald Trump&#8217;s inauguration, the American AI community was rocked by news from Beijing as a new AI model, released by the well-known Chinese tech company DeepSeek, was introduced at a significantly lower cost. This represented a breakthrough for Beijing amid its tech race against Washington.</p>
<p>Everyone from national security analysts to tech CEOs and legislators in the capital was talking about DeepSeek within hours. The assumption had been that the US held a firm lead over China in the competition for AGI; DeepSeek&#8217;s release called that assumption into question.</p>
<p>DeepSeek’s innovation was both a geopolitical bombshell and a technical marvel. According to tech experts, despite using significantly fewer resources, DeepSeek’s AI model excelled at critical reasoning and natural language processing tasks. After the news, American AI companies’ lobbyists, OpenAI’s above all, rallied right away.</p>
<p>“DeepSeek shows that our lead is not wide and is narrowing,” Chris Lehane, the head lobbyist for OpenAI, wrote in a prominent letter to the White House. The message was clear: American artificial intelligence was under siege, and the country risked handing over control of a technology that could shape civilisation to the Chinese Communist Party unless regulations were rolled back.</p>
<p>The White House had an attentive ear to that warning. Most of the tech team that would serve in Trump&#8217;s second term was composed of venture capitalists and libertarian-leaning &#8216;tech right&#8217; ideologues who had long despised the regulatory posture of the Biden administration, seeing it as stifling innovation.</p>
<p>The US government moved to complete its long-promised AI policy agenda. In July, Trump unveiled the “AI Action Plan,” a comprehensive blueprint that prioritises deregulation, domestic semiconductor manufacturing, and a significant push for energy expansion to meet the computational demands of next-generation models. The plan also emphasises how important it is for American businesses to develop open AI models to avoid dependence on Chinese AI systems.</p>
<p>“There’s a lot of scepticism inside the Administration about the idea of recursive self-improvement or runaway intelligence. Most people think that’s science fiction, or at the very least, a distant problem,” Vice-President JD Vance noted.</p>
<p>However, AI labs including Meta, Anthropic, and OpenAI are certain that AGI is no longer a distant project. OpenAI CEO Sam Altman, in a June letter, denied that superintelligence inevitably results in disaster, saying instead that the &#8220;take-off has started.&#8221;</p>
<p>“The 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out. To many in Silicon Valley, DeepSeek&#8217;s rise and Trump’s deregulatory policies signal not just an intensifying tech cold war— but the start of the final sprint to AGI,” Altman wrote.</p>
<p>Experts are still unsure whether the US is ready for what will happen next. Tech critics say that speed without vision is dangerous, even though the AI policy has freed American businesses to innovate without restriction. The likelihood of achieving AGI has increased following DeepSeek’s breakthrough, and it appears that within this decade, humanity will either master artificial intelligence or be subdued by it.</p>
<p>The “Intelligence Rising” game is known for combining warning with simulation. Players assemble around a table strewn with laptops, printouts, and world maps, trying to steer the development of breakthrough technology. According to the simulated technology tree that powers the game&#8217;s logic, a series of recent advances in AI has put human-level intelligence firmly within reach by 2027.</p>
<p>Tellingly, none of the teams in this event made any investment in AI safety. In a steady, almost resigned tone, the moderator says disaster is a question of when rather than if.</p>
<p>The researchers behind “Intelligence Rising” believe that AGI is not only imminent but harmful by default. This assumption is built openly into the game&#8217;s mechanics rather than concealed. Unless robust safeguards are developed, they argue, disaster is unavoidable.</p>
<p>This is not limited to the game either. The US government has already participated in similar scenario exercises. Jake Sullivan, former National Security Advisor, quietly started an interagency project in 2022 to investigate the estimated arrival of AGI. He says the simulations examined how the US and China might act in a fiercely competitive AI race.</p>
<p>He did not reveal any information at the time, but participants are said to have included key intelligence and science offices as well as representatives from the Departments of Defence, State, Energy, and Commerce, and the planning was reportedly treated with full seriousness.</p>
<p>“I consider it a distinct possibility that the darker view (of AI risk) could be correct. For me, the lesson from those classified exercises was that policy had to get ahead of the technology. The threat wasn’t just from adversaries like China, but from within, through reckless deployment, lack of coordination, or overconfidence in systems we barely understand. We have to take the possibility of dramatic misalignment extremely seriously,” he said in an interview with TIME early in 2025.</p>
<p>Moreover, AGI indeed brings unheard-of dangers as well as tremendous promise. As the world tech community draws ever closer to it, it needs to understand that its advantages cannot be pursued at the expense of long-term sustainability, safety, or ethics. The risks that AGI brings are not hypothetical; they are very real, and unchecked development could have catastrophic results.</p>
<p>Most importantly, we must figure out how to control artificial intelligence, just as nations controlled nuclear technology during the Cold War, so that AGI works for us rather than setting us on a course for disaster.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/agi-a-turning-point-in-technology/">AGI: A turning point in technology</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/agi-a-turning-point-in-technology/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Mark Zuckerberg’s risky ‘AGI’ &#038; ‘Hawaii’ bet</title>
		<link>https://internationalfinance.com/magazine/technology-magazine/mark-zuckerbergs-risky-agi-hawaii-bet/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=mark-zuckerbergs-risky-agi-hawaii-bet</link>
					<comments>https://internationalfinance.com/magazine/technology-magazine/mark-zuckerbergs-risky-agi-hawaii-bet/#respond</comments>
		
		<dc:creator><![CDATA[IFM Correspondent]]></dc:creator>
		<pubDate>Wed, 20 Mar 2024 18:41:36 +0000</pubDate>
				<category><![CDATA[Magazine]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AGI]]></category>
		<category><![CDATA[Artificial General Intelligence]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Hawaii]]></category>
		<category><![CDATA[Kauai]]></category>
		<category><![CDATA[Mansions]]></category>
		<category><![CDATA[Meta]]></category>
		<category><![CDATA[sugarcane]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[Zuckerberg]]></category>
		<guid isPermaLink="false">https://internationalfinance.com/?p=49514</guid>

					<description><![CDATA[<p>Mark Zuckerberg wants Meta's AGI to be transparent and inclusive, while mitigating concerns surrounding its capabilities</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/mark-zuckerbergs-risky-agi-hawaii-bet/">Mark Zuckerberg’s risky ‘AGI’ &#038; ‘Hawaii’ bet</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Meta CEO Mark Zuckerberg is in the news again. His venture has now joined the race to turn the concept called Artificial General Intelligence (AGI) into a reality. Meta will be competing against San Francisco-based AI start-up OpenAI, which has already outlined its AGI plans.</p>
<p>Artificial General Intelligence is all about creating software with human-like intelligence and the ability to teach itself. Such software should be able to perform tasks that it was never explicitly trained or developed for.</p>
<p><strong>What is Meta up to?</strong></p>
<p>Meta is training its next-gen model Llama 3, apart from building a massive computing infrastructure to support the company’s future roadmap on the AGI front.</p>
<p>AGI will have the ability to mimic human performance across tasks. Given that this innovation is set to become the next holy grail of the tech sector, Meta wants its slice of the pie while it is hot.</p>
<p>Given that AGI will be mimicking human performance, critics have been uneasy. They are after Zuckerberg for taking an &#8220;irresponsible approach&#8221; to AGI. The tech boss&#8217; detractors fear that Meta may make AGI available to the public in the future, which could lead to a situation where &#8220;AI will evade human control and eventually take over humanity.&#8221;</p>
<p>Dame Wendy Hall, a professor of Computer Science at the University of Southampton, told The Guardian that the prospect of an open-source AGI was ‘very scary’. Hall, who is also a member of the United Nations’ advisory body on AI, lashed out at Zuckerberg for playing around with AGI, as she believed that the technology, if it fell into the wrong hands, could do a great deal of harm.</p>
<p>However, Meta is going ahead with its plan, under which Nvidia’s H100 GPU chips will power the tech giant&#8217;s computing infrastructure for AGI-related projects. Zuckerberg wants Meta&#8217;s AGI to be transparent and inclusive, while mitigating concerns surrounding its capabilities.</p>
<p><strong>Criticism galore</strong></p>
<p>David Thiel, a big data architect and chief technologist of the Stanford Internet Observatory, told RollingStone, “Honestly, the ‘general intelligence’ bit is just as vaporous as ‘the metaverse’,” as he found Meta&#8217;s AGI claims pretentious, something which gives the venture &#8220;an argument that they’re being as transparent about the tech as possible. But any models they release publicly are going to be a small subset of what they actually use internally.”</p>
<p>Sarah Myers West, managing director of the research non-profit ‘AI Now Institute’, described Zuckerberg’s announcement as something that “reads clearly like a PR tactic meant to garner goodwill, while obfuscating what’s likely a privacy-violating sprint to stay competitive in the AI game.” Myers West, like Thiel, found the AGI pitch less than convincing.</p>
<p>Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford’s Institute for Ethics in AI, speculated that Meta could start with something like Llama and expand from there.</p>
<p>“I imagine that they will focus their attention on large language models, and will probably be going more in the multimodal direction, meaning making these systems capable with images, audio, video,” he said, comparing Meta&#8217;s efforts with Google’s Gemini, which was released in December 2023.</p>
<p>Conitzer, however, stated that while there were dangers to open-sourcing large language model-based technology, &#8220;The alternative of just developing these models behind the closed doors of profit-driven companies also raises problems.&#8221;</p>
<p>Experts are also flagging Meta&#8217;s chequered history on the privacy front.<br />
“They have access to massive amounts of highly sensitive information about us, but we just don’t know whether or how they’re putting it to use as they invest in building models like Llama 2 and 3. Meta has proven time and time again it can’t be trusted with user data before you get to the endemic problems in LLMs with data leakage. I don’t know why we’d look the other way when they throw ‘open source’ and ‘AGI’ into the mix,” Sarah Myers West commented.</p>
<p>As per Conitzer, human civilisation is facing a future where AI systems like Meta’s have “ever more detailed models of individuals.”</p>
<p>“Maybe in the past, I shared some things publicly and I thought each of those things individually wasn’t harmful to share. But I didn’t realise that AI could draw connections between the various things that I posted, and the things that others posted, and that it would learn something about me that I really didn’t want out there,” he added further.</p>
<p>Recall the October 2023 Guardian article, which reported that Meta had to argue in an Australian court, while fighting a case over the Cambridge Analytica breach, in which tens of millions of users’ data was harvested using a personality quiz and used to aid political campaigns, including Donald Trump’s 2016 United States presidential election campaign.</p>
<p>In front of the Australian judge, the social media company had to state that private messages, pictures, email addresses and the content of Facebook users’ posts were not “sensitive information”.</p>
<p>Then in August 2023, X (the rebranded Twitter) users shared screenshots of the privacy policy of Threads (Meta&#8217;s newest flagship social media product) from Apple’s App Store. The policy reportedly indicated that Threads gathers personal details from its users, ranging from health and financial data to browsing history and location, details which could be passed on to advertisers.</p>
<p>In another 2023 example, the European Union fined Meta a record $1.3 billion after finding the Facebook parent broke its privacy laws by transferring user data from Europe to the United States.</p>
<p>These three examples alone are enough to show that experts&#8217; scepticism around Meta&#8217;s AGI efforts is not completely unfounded.</p>
<p><strong>Zuckerberg&#8217;s Hawaii bet</strong></p>
<p>&#8220;Off the two-lane highway that winds along the northeast side of the Hawaiian island of Kauai, on a quiet stretch of ranchland between the tourist hubs of Kapaa and Hanalei, an enormous, secret construction project is underway. A six-foot wall blocks the view from a nearby road fronting the project, where cars slow to try to catch a glimpse of what’s behind it. Security guards stand watch at an entrance gate and patrol the surrounding beaches on ATVs. Pickup trucks roll in and out, hauling building materials and transporting hundreds of workers,&#8221; reports WIRED on a project where, as per the publication&#8217;s sources, the workers have been told to maintain utmost secrecy about what they are working on.</p>
<p>Nobody working on this project is allowed to talk about what they’re building. Almost anyone who passes compound security is bound by a strict nondisclosure agreement, according to several workers involved in the project. And, they say, these agreements aren’t a formality.</p>
<p>Multiple workers claim they saw or heard about colleagues removed from the project for posting about it on social media. Different construction crews within the site are assigned to separate projects and workers are forbidden from speaking with other crews about their work.</p>
<p>&#8220;The project is so huge that a not-insignificant share of the island is bound by the NDA. But everyone here knows who is behind it. Mark Zuckerberg, CEO of Meta, who bought the land in a series of deals beginning in August 2014,&#8221; WIRED stated further, hinting at whose brainchild the &#8216;project&#8217; might be.</p>
<p>WIRED, after interviewing several stakeholders associated with the &#8216;project&#8217;, apart from accessing public records and court documents, suggested that the roughly 1,400-acre compound, known as &#8216;Koolau Ranch&#8217;, will include a 5,000-square-foot underground shelter, have its own energy and food supplies, and, when coupled with land purchase prices, will cost in excess of $270 million. The project has been relying upon legal manoeuvring and political networking, apart from reportedly showing disregard for the local public&#8217;s concerns.</p>
<p>&#8216;Koolau Ranch&#8217; is located on Kauai, the oldest and smallest of the four main Hawaiian Islands. Kauai is a tight-knit community of about 73,000 people, who are mostly the descendants of Native Hawaiians, along with Chinese, Japanese, Filipino, and Puerto Rican migrants who came to work the sugarcane plantations in the late 19th and early 20th centuries.</p>
<p>&#8220;Some of the more recent arrivals come from the US mainland and other Pacific islands. When plantation owners moved their operations overseas in search of cheaper labour, the island’s sugarcane economy was replaced by tourism. Workers on the Zuckerberg site are part of a growing construction industry focused on luxury home builds for mainlanders looking to move to paradise,&#8221; WIRED stated further.</p>
<p>&#8220;Tall tales about the compound and its owner run rampant on the local rumour mill—known colloquially as the &#8216;coconut wireless.&#8217; One person heard that Zuckerberg was building a vast underground city. Many people speculate that the site will become some sort of post-apocalyptic bunker in case of civilisation collapse. What’s being built doesn’t live up to the coconut wireless chatter, but it&#8217;s close. Detailed planning documents obtained by WIRED through a series of public record requests show the makings of an opulent techno-Xanadu, complete with underground shelter and what appears to be a blast-resistant door,&#8221; the publication noted further.</p>
<p>The property is centred on two mansions with a total floor area comparable to a professional football field, containing multiple elevators, offices, conference rooms, and an industrial-sized kitchen.</p>
<p>&#8220;In a nearby wooded area, a web of 11 disk-shaped tree houses are planned, which will be connected by intricate rope bridges, allowing visitors to cross from one building to the next while staying among the treetops. A building on the other side of the main mansions will include a full-size gym, pools, sauna, hot tub, cold plunge, and tennis court. The property is dotted with other guest houses and operations buildings. The scale of the project suggests that it will be more than a personal vacation home — Zuckerberg has already hosted two corporate events at the compound,&#8221; the report noted further.</p>
<p>&#8220;The plans show that the two central mansions will be joined by a tunnel that branches off into a 5,000-square-foot underground shelter, featuring living space, a mechanical room, and an escape hatch that can be accessed via a ladder,&#8221; WIRED stated, with one of its sources even recounting that there were cameras everywhere on the property, with more than 20 covering one smaller ranch operations building alone.</p>
<p>&#8220;Many of the compound’s doors are planned to be keypad-operated or soundproofed. Others, like those in the library, are described as &#8216;blind doors,&#8217; made to imitate the design of the surrounding walls. The door in the underground shelter will be constructed out of metal and filled in with concrete—a style common in bunkers and bomb shelters,&#8221; WIRED continued further.</p>
<p>The compound will have its own water supply, alongside the 1,400-acre agricultural property. The utmost secrecy around the project gives the impression that the world is getting its next &#8216;Area 51&#8217;.</p>
<p>As per a Kauai journalist named Allan Parachini, publishing news on &#8216;Koolau Ranch&#8217; results in the local press getting &#8216;reprimanded&#8217;. Throughout 2017, Parachini requested permit records to learn more about the property. After his opinion piece on &#8216;Koolau Ranch&#8217;, a local Zuckerberg representative informed Parachini that the Meta CEO&#8217;s team would not communicate with the journalist for any future coverage.</p>
<p>Despite Meta’s chequered history with its data privacy practices, Zuckerberg is known for being zealous when it comes to protecting his own privacy. In 2004, Zuckerberg reportedly requested that two student journalists sign an NDA (Non-Disclosure Agreement) before an interview. In 2010, when one of his employees leaked product plans to the media, Zuckerberg demanded the leaker’s immediate resignation.</p>
<p>Facebook’s contracted content monitors have reportedly been made to sign NDAs. These professionals can&#8217;t discuss anything about their working conditions publicly.</p>
<p>Zuckerberg&#8217;s Kauai neighbour, Hope Kallai, saw a six-foot wall being erected around portions of the Meta boss&#8217; property in 2016, ensuring privacy for his family but reportedly denying local residents their ocean view in the process. The island is also seeing a massive influx of outsiders (mostly construction workers), along with heavy vehicle flows and the resultant noise pollution.</p>
<p>On Kauai, private construction within a conservation zone known as a &#8216;Special Management Area&#8217; triggers a public review process. The Meta boss&#8217;s project, however, does not fall inside such a protected zone. Still, as per Kallai, a community meeting on the project “would be really welcome.”</p>
<p>Zuckerberg has been facing bad press regarding his Kauai project. To mitigate that, he and his wife, Priscilla Chan, have reportedly launched a local charity called the &#8216;Chan Zuckerberg Kauai Community Fund&#8217;, which has given over $20 million to various Kauai non-profits since 2018. The couple has also reportedly established a relationship with Kauai mayor Derek Kawakami, holding meetings with the official to discuss funding local initiatives during a 2018 flooding crisis and the COVID outbreak on the island. In March 2021, Zuckerberg and Chan helped relaunch a county jobs programme with a $4.2 million donation and gave $3.5 million to local COVID-19 assistance projects.</p>
<p><strong>A course correction?</strong></p>
<p>In November 2021, Zuckerberg reportedly gave a $4 million gift to fund the purchase of a traditional Hawaiian fishpond managed by Malama Huleia, a local non-profit that focuses on wetland restoration through native Hawaiian practices. That non-profit also had ties to local government, with the then vice chair of the county council Mason Chock serving as its president.</p>
<p>Brandi Hoffine Barr, spokesperson for the &#8216;Chan Zuckerberg Initiative&#8217;, told WIRED that the Meta boss and his team are continuously trying to engage with the Kauai community.</p>
<p>Through donations, Zuckerberg and Chan are reportedly now among the most important philanthropists on Kauai. Local Facebook pages regularly feature &#8216;Appeals to Zuckerberg&#8217; to fix the island&#8217;s problems.</p>
<p>However, the question remains, will the local community on Kauai ever accept Zuckerberg?</p>
<p>“Zuckerberg’s presence may increase charity, but will not address the root causes of why we need this type of philanthropic charity in the first place,” says Nikki Cristobal, executive director of local Hawaiian education and arts non-profit Kamawaelualani.</p>
<p>The WIRED report claims that Kauai locals view the billionaire as a part of a larger machine, &#8220;the same one that has been buying up Hawaiian land since the &#8216;Great Mahele&#8217; authorised private land ownership in 1848.&#8221;</p>
<p>Zuckerberg may not feel obliged to sit down and clarify his &#8216;AGI Vision&#8217; to his detractors. In Kauai&#8217;s case, however, a similar &#8216;I Don&#8217;t Care&#8217; approach may not work for the Meta boss.</p>
<p>The post <a href="https://internationalfinance.com/magazine/technology-magazine/mark-zuckerbergs-risky-agi-hawaii-bet/">Mark Zuckerberg’s risky ‘AGI’ &#038; ‘Hawaii’ bet</a> appeared first on <a href="https://internationalfinance.com">International Finance</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://internationalfinance.com/magazine/technology-magazine/mark-zuckerbergs-risky-agi-hawaii-bet/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
