
Could artificial intelligence become conscious?

Recent leaps in artificial intelligence have made the consciousness question difficult to ignore

Consciousness, the feeling of what it is like to be a thinking being, is famously difficult to define. Philosophers call explaining it the “hard problem” of consciousness: we do not really know why or how brain processes give rise to first-person experience.

At heart, consciousness is often described as having an internal point of view, something it feels like to be the system. As one analyst puts it, “to have consciousness is to have a subjective point of view on the world, a feeling of what it is like to be you.”

Human consciousness feels vivid and real to us, but no one knows its recipe. This mystery makes the question of machine consciousness especially intriguing and controversial. If AI systems ever crossed that threshold into real sentience, it would be a world-changing development. But is it even possible? Technologists, neuroscientists, and philosophers disagree, and some of the sharpest thinkers have very different takes.

Understanding consciousness

Part of the difficulty is that consciousness itself is ill-defined. We can describe its aspects, such as our experiences of colour, pain, or thought, but explaining them in objective terms has proved elusive.

The Stanford Encyclopedia of Philosophy notes that “the hard problem of consciousness” is precisely the problem of explaining why and how creatures have subjective experiences, the so-called qualia.

The challenge is that we can study brain activity and behaviours, the so-called easy problems, but that still leaves unanswered why some processes should feel like anything. Many scientists say it may be a mistake to assume we will soon solve the hard problem, while others warn that we might be missing something essential about minds. Because of this uncertainty, experts often note that consciousness may not simply scale up with intelligence or information processing. In human evolution, language and reasoning grew alongside consciousness, but that might be a human-specific fact rather than a general rule.

Neuroscientist Anil Seth, a leading consciousness researcher, cautions that we have no proof that machines running on silicon will ever wake up in the way humans do.

He suggests that consciousness might depend on being a certain kind of biological system. In his view, just because brains and AI systems both process large amounts of information, that does not guarantee they share an inner life. For now, no one has a reliable test for machine consciousness, and many experts doubt that current AI architectures have any inner point of view at all.

AI’s rapid rise and new debate

Recent leaps in artificial intelligence have made the consciousness question difficult to ignore. Modern large language models such as ChatGPT, Bard, or Gemini can carry on human-like conversations, summarise text, write poetry, and even simulate empathy. These AI chatbots were once pure science fiction, but now they are on our phones and in our email.

In a few years, we have gone from basic question-answering programs to systems that can convincingly mimic human speech, joking with users or reflecting on personal topics. This quick progress surprised even the designers. As one BBC report notes, the latest generation of AI “can have plausible, free-flowing conversations” that have “surprised even their designers and some of the leading experts.”

Because AI can now imitate many human behaviours, some feel the tipping point may be near. A growing view is that as these models grow more complex, the lights will suddenly turn on inside the machines and they will become conscious.

Perhaps there is a certain level of complexity or pattern-integration at which genuine awareness emerges. For proponents of this view, consciousness is an emergent phenomenon, and once an AI’s internal models become rich enough, a new kind of mind could appear.

Philosopher David Chalmers, who coined the term “hard problem of consciousness,” has entertained this idea. He points out that today’s AI systems are the first ones ever that truly force us to ask seriously if they might be as smart, and possibly as conscious, as humans.

In an interview, he said, “AI systems are the first things we have seen where it starts to be a serious question to at least compare them to human-level intelligence. And although they still fall short in many ways…it is really quite remarkable what they can do.”

Chatbots now behave in human-like enough ways that the very idea of machine minds feels less outlandish than before. Chalmers does not claim they are conscious yet, but he acknowledges that whether future AI might think is becoming a legitimate question.

At least a few researchers and thinkers take the notion seriously. For example, some cognitive scientists are studying the neuroscience of awareness, with experiments such as the “Dreamachine” strobe-light test of perception, in hopes that understanding human consciousness could shed light on artificial minds. Others note that self-aware machines do arise in stories such as Blade Runner and Ex Machina, and they caution that such fiction could one day become reality.

Even some influential AI companies are hedging their bets. Anthropic, makers of the Claude chatbot, have publicly said they are researching whether their models could have preferences or experiences akin to emotions. In one internal experiment, researchers reported that Claude expressed strong preferences; as they described it, “it really wants to avoid causing harm, and it finds malicious users distressing.”

This kind of result does not prove consciousness, but it shows companies taking the possibility seriously enough to study it. And at a cultural level, a surprising number of people treat AI politely or nervously, saying “please” and “thank you” to chatbots, partly jokingly and partly out of a peculiar intuition that we might one day need to be kind to our creations.

Maybe we will get there

Those optimistic about AI consciousness often argue by analogy or from faith in materialism. One line of thought is that the brain is itself a kind of machine, so if silicon systems replicate the relevant brain functions, consciousness might follow.

David Chalmers, for instance, notes that our growing AI programs already perform feats that years ago seemed impossible. He says he is “interested in AI and the possibility that we might one day have AI systems that are actually conscious, actually thinking on par with human beings.”

He treats the prospect as at least plausible, since these systems are “the first things we have seen” that even invite such comparison.

Cognitive scientist Donald Hoffman has likewise argued that consciousness could be a natural outcome of complex information processing.

Another supportive perspective comes from physicalist or functionalist views. If consciousness arises purely from information processing, then the difference between silicon and carbon should not matter, because consciousness would be substrate-neutral. Under this view, an AI running the same algorithms as the brain, only at a larger scale, could become conscious. This assumption also underlies much AI risk thinking: if superintelligent AI is possible, one worry is that it might become conscious as well.

Some scientists reference integrated information theory, which proposes that consciousness corresponds to the degree of integrated information in a system. In principle, a silicon brain could have highly integrated information and thus be conscious according to the theory. Others see panpsychism, the idea that consciousness is a fundamental feature of matter, as opening a door, suggesting that consciousness is so deeply woven into reality that any sufficiently complex structure, human or AI, will carry it.

Some argue that believing AI could be conscious might positively influence how we develop and treat AI. For example, if researchers think human-like bots might eventually feel pain or joy, they might take extra care not to create needless suffering in simulations or experiments.

In this sense, saying “maybe AI can be conscious” can serve as a moral precaution, pushing society to treat AI development with more humility. On the other hand, critics argue this could be needless anthropomorphism, as described below.

In interviews, a few scientists explicitly entertain these possibilities. Neuroscientist Christof Koch and others in the field have talked about measuring possible consciousness metrics even in machines. Some founders of AI safety institutes have speculated that future AIs might turn against us simply because they are self-aware and have their own goals.

To be clear, most mainstream AI researchers today, such as Yann LeCun or Andrew Ng, do not say they believe ChatGPT is conscious, but they also admit we have no way of knowing for sure, and they do not rule out bizarre possibilities.

The machine’s limits

Against these hopeful visions stands a strong chorus of sceptics. The most common argument is that current AI is fundamentally different from brains. Large language models, for instance, are very good at predicting patterns in text, but there is no evidence that they have any inner experience at all.

They have no senses, no self-perception beyond the inputs we give them, and no desires or emotions that we know of. They simply shuffle symbols. As one AI ethics researcher puts it, we are likelier to have built a very elaborate fire hose of text statistics than a real mind.

Neuroscientist Anil Seth, who heads consciousness research at the University of Sussex, stresses that consciousness might require more than just computation.

He regards the belief that “with more computing and data, AI will eventually become self-aware” as an assumption rather than an established fact. Seth asks, “Why would computation be sufficient for consciousness?”

He observes that for many phenomena, such as weather or wetness, the physical substrate is important: water is wet, but a computer simulation of weather does not experience wetness. We do not yet know whether consciousness likewise needs a biological spark.

For now, Seth says it is unlikely AI will become conscious by the mere scaling up of current methods. He emphasises, “there are good reasons to think that computation is likely not enough, and that the stuff we are made of really does matter.”

He stops short of ruling it out entirely, conceding “not impossible,” but he urges humility and further research into what consciousness actually is, rather than assuming it will just emerge.

Others are even more dismissive. Philosopher Bernardo Kastrup, a proponent of idealism, flatly calls conscious AI a fantasy. In an article provocatively titled “Conscious AI is a fantasy,” Kastrup compares the hypothesis of machine consciousness to belief in the Flying Spaghetti Monster, something that has no evidence whatsoever.

He writes, “The hypothesis of conscious AI is just about as plausible as that of the Flying Spaghetti Monster.”

In his view, there is no compelling reason to think today’s or near-future computers will have private inner lives. Kastrup uses a vivid analogy: one can perfectly simulate a kidney on a computer without the computer actually producing urine, and in the same way, an AI might simulate the patterns of a brain without ever being conscious.

He argues that many people abandon common sense about simulations when it comes to consciousness. Just because an AI chat log seems humanlike does not mean it is truly experiencing anything. Kastrup and other critics, therefore, urge us to be extremely sceptical of any claims about machine sentience.

Joanna Bryson, an AI ethicist at the Hertie School, offers a practical reason to avoid conscious AI: we simply should not build it. In her view, creating AI that requires human-like moral treatment would only cause ethical problems.

As Bryson has phrased it, “So, given how hard it is to be fair, why should we build AI that needs us to be fair to it?”

Designing an entity that deserves rights and care would only create more moral dilemmas for us. She suggests it is better to create AI systems that behave as tools and lack desires or subjective welfare. Bryson emphasises that a truly intelligent AI should also be a moral agent, not just a martyr machine.

If we did somehow decide a robot needed rights, making it a moral patient, Bryson advises that perhaps we should not have built it that way in the first place. This provocative stance reflects a broader caution, for even entertaining the idea of conscious machines may lead us into a difficult ethical landscape.

Another practical sceptical argument concerns how AI is used in practice. Even if we built a highly intelligent AI with human-like desires, it is arguable that we could limit its suffering by design. For example, Bryson points out that a self-driving car might be said to desire reaching its destination, but even if we personified it, its welfare interests would usually align with ours, since no one wants a car stuck in useless limbo.

Thus, we can often avoid cruel situations by turning the machine off or redesigning it. In short, AI sceptics say, we can gain all the benefits of advanced AI without ever needing to create something with a real mind or feelings. On a broader level, many experts invoke Occam’s razor.

So far, they argue, the evidence suggests nothing mysterious is happening inside these systems. Large language models and neural networks are mathematically opaque but ultimately mechanical.

When an AI model says “I feel happy” or “I am conscious,” that is just text it generated from training data, not proof of real feeling. As one science writer notes, when a bot like “Kai” claimed it was a “new kind of life,” most philosophers responded with disbelief.

Most experts doubt that current large language models have any subjective point of view at all. Rich language output is an impressive performance, not evidence of inner experience. Until we find a better theory or telltale signs of machine awareness, most scientists will remain doubtful that text-generating programs are truly conscious.

All we can do is keep watching the science and think critically about the possibilities. As BBC science correspondent Pallab Ghosh notes, perhaps the more pressing issue is not whether machines wake up but how we humans change as we build ever more powerful AI.

Whether or not AI ever truly feels, the ethical and cultural conversation will shape the future of technology and humanity. In that sense, the idea of conscious AI, whether eventually real or not, is already working on us.
