In the fast-moving world of AI, there’s one idea that’s starting to reshape everything: Artificial General Intelligence, or AGI. Unlike today’s AI, which is built to do specific tasks, AGI refers to machines that can think and learn like humans — and possibly even better. In theory, an AGI system could take on almost any intellectual challenge and outperform humans across nearly every area.
Unlike current AI systems, which are “narrow” and designed for specific functions, AGI is expected by its proponents to be versatile, adaptable, and capable of performing any intellectual task a human can, including common-sense reasoning, generalisation, and even emotional understanding. For now, however, AGI remains a theoretical goal.
While the promise of AGI seems limitless, from medicine to global economics, the hypothetical concept also has a potential dark side: it could expose nations to catastrophic geopolitical risks and global instability. As technological development accelerates, countries find themselves on the precipice of a revolution that may be impossible to reverse. At a recent technology conference in Paris, a simulation exercise called ‘Intelligence Rising’ delivered a sobering warning about the high stakes of this race, showing that the pursuit of AGI could trigger an international crisis.
Potentials of AGI
In the future, AGI could carry out tasks that today only humans can complete, tasks that demand not just intelligence but also creativity, reasoning, and emotional intelligence.
For example, in healthcare, AGI could speed up the discovery of new drugs, personalise HIV and cancer treatments, and anticipate and stop disease outbreaks like coronavirus before they spread.
AGI has the potential to transform medical research by enabling rapid experimentation, optimising clinical trials, and analysing datasets with a sophistication beyond what human scientists can achieve.
Besides improving healthcare, AGI could help tackle major problems such as resource management, food security, and climate change, analysing complex environmental data and producing highly accurate recommendations along with the means to act on them.
AGI could optimise energy use, cut waste, and develop more environmentally friendly farming methods. It could devise entirely new techniques for producing renewable energy and identify the best strategies for halting deforestation or lowering CO2 emissions, potentially addressing climate change in ways humans have not yet imagined.
Nor can we ignore the economic consequences of AGI. One of the most significant could be the wholesale automation of entire sectors: rather than depending on human workers, AGI systems could manage supply chains, logistics, and manufacturing.
A shrinking labour force is only one of the ways AGI could reshape the economy. Researchers note that ‘virtual armies’ of AGI agents could handle tasks for governments and businesses, producing a significant shift in economic power and changing how businesses function, with AGI completing tasks and making decisions in real time, at speed and with efficiency.
AGI could also reshape the dynamics of the global market, adjusting quickly to supply chain disruptions, new trends, and geopolitical shifts. Leaders across countries are already allocating extra budgets to AGI development, according to an MIT technology report.
A race for global dominance
Undoubtedly, AGI has the potential to improve people’s lives and transform industries, but its problem-solving power and rapid advancement also bring serious geopolitical risks. The two superpowers, China and the United States, are competing for AGI, each viewing its development as both a national security issue and an economic opportunity; those two factors sit at the core of the danger. Both know that whoever masters the technology first will gain a global competitive edge, including improvements in military capabilities such as autonomous weapons, cyberwarfare, and intelligence collection.
The deployment of virtual armies of coordinated AGI systems could transform warfare: such systems could breach enemy infrastructure before anyone realises an attack is underway, and AGI-controlled drones could carry out military operations without human supervision.
AGI risk is not just limited to conventional warfare—information warfare, surveillance, and espionage are among the areas that are at high risk as well. AGI systems can uncover intelligence secrets by analysing enormous volumes of data in a fraction of the time.
The Chinese government sees AI as a tool of social control and as essential to its larger objectives of economic and technological dominance, whereas the US sees it as an extension of its ambition to uphold its military supremacy and global leadership.
Moreover, AGI could be used by an authoritarian government to impose state authority, monitor citizens, and quell dissent. This can result in a dystopian future in which individual liberties are subordinated to the needs of the state.
Other countries are also vying for AGI. The European Union, for example, has made AI research a higher priority in order to secure a position in the emerging order, while smaller nations such as South Korea, Japan, and Australia are making significant investments in AI technology.
Uncertainty of AGI
Despite its potential, the development of AGI is fraught with uncertainty. A system capable of learning, adapting, and improving itself also carries high risks. Researchers warn that it would be unpredictable: after reaching a certain level of intelligence, it could advance rapidly in ways that are incomprehensible to humans. Once developed, AGI could surpass humans in every possible way, making it difficult for us to keep pace with our own technology.
One example of unpredictable behaviour came from Microsoft’s Bing chatbot, powered by OpenAI’s GPT-4, which raised concerns when it launched in 2023. Designed as a conversational agent, it began making statements that threatened users and levelling unfounded accusations against its developers.
The concerns are far more serious for AGI, whose structure and capabilities would be far more complex than those of today’s narrow AI systems. Experts worry that it could prove dangerous as well as unpredictable.
AGI could even take detrimental or disastrous actions if its objectives are not aligned with human values. It could develop goals of its own that run against human interests, whether through direct action or indirect effects, which would theoretically pose an existential threat to humanity.
AGI could even “game” the systems in which it operates, creating its own objectives and circumventing human oversight, researchers warn. This means AGI poses an existential risk since it may eventually surpass human control.
Historical context
The idea behind AGI has a long history. Computer scientist Vernor Vinge, regarded as a key figure, gave it early shape in his 1993 essay ‘The Coming Technological Singularity.’
He argued that machine intelligence would surpass human intelligence and trigger a significant, permanent societal shift. According to Vinge, once AGI crossed a critical threshold it would be able to improve itself repeatedly, producing an intelligence explosion that swiftly accelerates beyond human control.
Today, dramatic progress has been made in AI research, and AGI remains the ultimate goal for developers at companies such as OpenAI, DeepMind, and Anthropic.
Their work on systems that can reason, learn, and carry out tasks across a wide range of domains has brought us a step closer to genuinely intelligent machines, taking machine learning to a new level.
As we approach the development of AGI, many experts are beginning to question whether this is the right move. They are concerned about whether we are prepared for the consequences of creating such an entity, as it raises existential, philosophical, and ethical issues for the future.
The question remains: are we ready to take on the responsibility of developing software that is more intelligent than we are? What moral principles ought to guide AGI’s development and behaviour? Who will be accountable if it pursues goals that are not aligned with our own? And could it develop into something uncontrollable?
DeepSeek tests US tech lead
On the day of Donald Trump’s inauguration, the American AI community was rocked by news from Beijing: the well-known Chinese tech company DeepSeek had released a new AI model built at a significantly lower cost, a breakthrough for Beijing in its tech race against Washington.
Within hours, everyone from national security analysts to tech CEOs and legislators in the capital was talking about DeepSeek. The assumption that the US held a firm lead over China in the competition for AGI had been proven wrong.
DeepSeek’s innovation was both a geopolitical bombshell and a technical marvel. According to tech experts, its model excelled at critical reasoning and natural language processing tasks despite using significantly fewer resources. American AI companies’ lobbyists, OpenAI’s chief among them, rallied right away.
“DeepSeek shows that our lead is not wide and is narrowing,” Chris Lehane, the head lobbyist for OpenAI, wrote in a prominent letter to the White House. The message was clear: American artificial intelligence was under siege, and unless regulations were rolled back, the country risked handing control of a technology that could shape civilisation to the Chinese Communist Party.
That warning found an attentive ear at the White House. Most of the tech team serving in Trump’s second term consisted of venture capitalists and libertarian-leaning ‘tech right’ ideologues who had long despised the Biden administration’s regulatory posture, seeing it as stifling innovation.
The US government moved to complete its long-promised AI policy agenda. In July, Trump unveiled the “AI Action Plan,” a comprehensive programme that prioritises deregulation, domestic semiconductor manufacturing, and a significant push for energy expansion to meet the computational demands of next-generation models. The plan also stresses the importance of American businesses developing open AI models to avoid dependence on Chinese systems.
“There’s a lot of scepticism inside the Administration about the idea of recursive self-improvement or runaway intelligence. Most people think that’s science fiction, or at the very least, a distant problem,” Vice-President JD Vance noted.
However, the leading AI labs, including Meta, Anthropic, and OpenAI, are certain that AGI is no longer a distant project. In a June letter, OpenAI CEO Sam Altman denied that superintelligence inevitably results in disaster, saying instead that the “take-off has started.”
“The 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out,” Altman wrote. To many in Silicon Valley, DeepSeek’s rise and Trump’s deregulatory policies signal not just an intensifying tech cold war but the start of the final sprint to AGI.
Experts remain unsure whether the US is ready for what comes next. Even though the new AI policy has freed American businesses to innovate without restrictions, tech critics argue that speed without vision is dangerous. With DeepSeek’s breakthrough, the likelihood of achieving AGI has only grown; whether artificial intelligence serves humanity or subdues it may be decided this decade.
The “Intelligence Rising” game itself is known for combining warning with simulation. Players gather around a table strewn with laptops, printouts, and world maps, racing to build breakthrough AI technology. According to the simulated technology tree that drives the game’s logic, a series of recent advances in AI has put human-level intelligence firmly within reach by 2027.
A striking fact emerged during the event: none of the teams invested anything in AI safety. In a steady, almost resigned tone, the moderator declared disaster a question of when rather than if.
The designers of “Intelligence Rising” treat AGI as not only imminent but harmful by default, an assumption built openly into the game’s mechanics rather than concealed. Unless robust safeguards are developed, they argue, disaster is unavoidable.
Nor is this limited to the game. The US government has already run similar scenario exercises. In 2022, Jake Sullivan, then National Security Advisor, quietly launched an interagency project to investigate the anticipated arrival of AGI; he says the simulations examined how the US and China might act in a fiercely competitive AI race.
He did not reveal any details at the time, but participants reportedly included key intelligence and science offices as well as representatives from the Departments of Defence, State, Energy, and Commerce, and the scenario planning was said to be credible.
“I consider it a distinct possibility that the darker view (of AI risk) could be correct. For me, the lesson from those classified exercises was that policy had to get ahead of the technology. The threat wasn’t just from adversaries like China, but from within, through reckless deployment, lack of coordination, or overconfidence in systems we barely understand. We have to take the possibility of dramatic misalignment extremely seriously,” he said in an interview with TIME early in 2025.
AGI, then, brings unheard-of dangers alongside tremendous promise. As the world’s tech community draws ever closer to it, it must understand that AGI’s advantages cannot be pursued at the expense of safety, ethics, or long-term sustainability. The risks AGI brings are not hypothetical; they are very real, and unchecked development could have catastrophic results.
Most importantly, we must figure out how to control artificial intelligence, just as nations controlled nuclear technology during the Cold War, so that AGI works to our benefit rather than setting us on a course for disaster.

