
Can AI replace education? The choice is ours

Despite the hype, artificial intelligence cannot act or think for itself

One question looms as commencement ceremonies honour the promise of a new generation of graduates: will artificial intelligence (AI) render their education useless? Many CEOs believe it will. They paint a picture of a time when AI will replace teachers, engineers, and physicians.

Mark Zuckerberg, CEO of Meta, recently predicted that AI will replace the mid-level engineers who write the company’s computer code. Jensen Huang of NVIDIA has gone further, declaring coding itself obsolete.

Bill Gates acknowledges that the rapid advancement of AI is “profound and even a little bit scary,” but he also applauds its potential to make elite knowledge widely available. In his ideal world, AI will take the place of programmers, physicians, and educators, providing free, excellent medical advice and tutoring.

Despite the hype, artificial intelligence cannot act or “think” for itself at this time. The key question, the one that determines whether AI improves learning or degrades comprehension, is whether we let AI simply forecast patterns or demand that it explain, justify, and stay rooted in the rules of our world.

Artificial intelligence requires human judgment not only to oversee its output but also to incorporate scientific boundaries that provide it with guidance, stability, and interpretability. Physicist Alan Sokal recently likened AI chatbots to an oral exam taken by a mediocre student.

According to Sokal, a user may not notice when a chatbot is wrong unless they are already extremely knowledgeable about the topic. That, in his opinion, sums up AI’s purported “knowledge” quite nicely: it does not understand anything; it simulates comprehension by anticipating word sequences.

This is why generative AI systems struggle to tell what is real from what isn’t, and why there are ongoing arguments over whether large language models can grasp cultural nuance. Teachers concerned that AI tutors might impair students’ critical thinking, and doctors worried about algorithmic misdiagnosis, are highlighting the same issue: machine learning excels at identifying patterns, but it lacks the deep understanding that arises from systematic, cumulative human experience and the scientific method.

Here, a burgeoning AI movement offers a way forward: directly integrating human knowledge into the machine learning process. Physics-Informed Neural Networks (PINNs) and Mechanistically Informed Neural Networks (MINNs) are two variants. The names sound technical, but the concept is straightforward: artificial intelligence improves when it complies with the laws of physics, biological systems, or social dynamics. People are therefore still needed to produce knowledge, not just consume it; AI functions best when it has that knowledge to learn from.
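To make the idea concrete, here is a minimal sketch of a physics-informed loss in PyTorch. It is our illustration, not code from any of the systems mentioned in this article: a small network fits a handful of noisy temperature readings while a second loss term penalises violations of Newton’s law of cooling, a stand-in physical law. The constants, network shape, and training settings are all illustrative assumptions.

```python
# Minimal physics-informed training loop (hypothetical example).
# The network learns T(t) from noisy data AND from the physical law
# dT/dt = -k * (T - T_ambient), enforced at collocation points.

import torch

torch.manual_seed(0)

K, T_AMBIENT = 0.3, 20.0  # assumed physical constants for the sketch

net = torch.nn.Sequential(            # T(t): maps time to temperature
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# A few noisy measurements (synthetic stand-ins for real observations)
t_data = torch.linspace(0.0, 10.0, 8).unsqueeze(1)
T_data = T_AMBIENT + 70.0 * torch.exp(-K * t_data) + torch.randn_like(t_data)

# Collocation points where the physics residual is enforced
t_phys = torch.linspace(0.0, 10.0, 100).unsqueeze(1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    # Data term: fit the observations
    loss_data = torch.mean((net(t_data) - T_data) ** 2)
    # Physics term: dT/dt + k*(T - T_ambient) should be ~0 everywhere
    T = net(t_phys)
    dT_dt = torch.autograd.grad(T.sum(), t_phys, create_graph=True)[0]
    loss_phys = torch.mean((dT_dt + K * (T - T_AMBIENT)) ** 2)
    (loss_data + loss_phys).backward()
    opt.step()
```

The design choice is the whole trick: the physics term constrains the network at points where no data exist, so the model cannot drift into curves that fit the observations yet break the law.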

Rather than relying on historical data to guess at what works, the algorithm is programmed to adhere to accepted scientific principles. Consider a family-run lavender farm in Indiana. For a business of this kind, blooming time is crucial: harvesting too early or too late weakens the potency of the essential oils, lowering quality and profitability. A generic AI could waste time sifting through meaningless patterns. A MINN, by contrast, begins with plant biology. Using equations that relate blooming to heat, light, frost, and water, it makes predictions that are prompt, accurate, and financially meaningful, as the sketch below illustrates. But it only works because it understands the chemical, biological, and physical mechanisms involved, and that knowledge comes from human-made science.
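For a flavour of the kind of biological rule such a model can build on, here is a hedged sketch of “growing degree days,” a standard heat-accumulation measure in crop science. The base temperature and bloom threshold below are made-up placeholders, not calibrated values for lavender, and a real MINN would embed a relationship like this inside the network rather than as a stand-alone function.

```python
# Growing degree days (GDD): a classic heat-accumulation model.
# Both constants are illustrative placeholders, not lavender data.

BASE_TEMP_C = 5.0   # assumed developmental base temperature
BLOOM_GDD = 700.0   # assumed heat accumulation needed to trigger bloom

def predict_bloom_day(daily_min_c, daily_max_c):
    """Return the first day index on which accumulated GDD crosses the
    bloom threshold, or None if the season's heat is insufficient."""
    gdd = 0.0
    for day, (lo, hi) in enumerate(zip(daily_min_c, daily_max_c)):
        # Each day contributes its mean temperature above the base
        gdd += max(0.0, (lo + hi) / 2.0 - BASE_TEMP_C)
        if gdd >= BLOOM_GDD:
            return day
    return None
```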

Consider applying this method to cancer detection. Breast tumours produce heat through increased blood flow and metabolism, so a purely predictive AI can try to identify tumours from thousands of thermal images using only patterns in the data. By contrast, a MINN, such as one recently created by researchers at the Rochester Institute of Technology, incorporates the laws of bioheat transfer directly into the model, working from body-surface temperature data.

So rather than speculating, it knows how heat flows through the body and can use the physics of heat moving through tissue to determine what is wrong, what is causing it, why, and exactly where it is. In one instance, a MINN predicted the location and size of a tumour to within a few millimetres, based solely on the way cancer alters the body’s heat signature.
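For reference, bioheat models of this kind are typically built on the Pennes bioheat equation; whether the Rochester team used exactly this formulation is our assumption:

$$\rho c \,\frac{\partial T}{\partial t} \;=\; \nabla \cdot (k \nabla T) \;+\; \rho_b c_b \omega_b \,(T_a - T) \;+\; Q_m$$

Here $T$ is tissue temperature; $\rho$, $c$, and $k$ are the tissue’s density, specific heat, and thermal conductivity; the subscript $b$ marks the corresponding blood properties, with $\omega_b$ the perfusion rate and $T_a$ the arterial blood temperature; and $Q_m$ is metabolic heat generation. A tumour raises perfusion and metabolic heat locally, which is precisely the perturbation that reaches the skin as a measurable thermal signature.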

The message is very clear: people are still necessary. As AI grows more sophisticated, our role is not going away; it is changing. When an algorithm generates strange, biased, or incorrect results, humans must call it out. The capacity to do so is not an AI shortcoming; it is the greatest human strength.

The true danger is not AI’s increasing sophistication; it is that we might cease applying our own intelligence. Treating AI like an oracle risks eroding our ability to think critically, ask questions, and spot faulty reasoning. Thankfully, this does not have to be how things turn out.

Erik Otarola-Castillo, an associate professor of anthropology at Purdue University, said, “We can create systems that are open, comprehensible, and based on the body of human knowledge regarding ethics, culture, and science. Policymakers can fund interpretable AI research. Universities can train students who combine technical skills with domain knowledge. Developers can adopt frameworks like MINNs and PINNs, which demand that models remain true to reality. Additionally, we as users, voters, and citizens have the power to insist that AI support science and objective truth rather than mere correlations.”

“After teaching scientific modelling and statistics at the university level for over ten years, I now concentrate on teaching students how algorithms work under the hood, by learning the systems themselves rather than memorising them. The aim is to raise literacy across the related fields of science, math, and coding,” Erik added.

Today, this approach is required. We do not need more people clicking “generate” on black-box models; we need people who can decipher an AI’s logic, code, and math and recognise its inaccuracies.

“Artificial intelligence won’t replace people or render education obsolete. However, if we lose the ability to think for ourselves and to appreciate the value of science and in-depth knowledge, we may replace ourselves. The decision is not whether to accept or reject AI; it’s whether we’ll remain knowledgeable and astute enough to steer it,” Erik noted.

Meanwhile, Adobe has expanded its agentic AI tools with the introduction of a Product Support Agent and the general release of its Data Insights Agent, both aimed at improving troubleshooting and insight generation in Adobe Experience Platform applications. The new Product Support Agent gives marketers and customer experience specialists an interactive way to identify and fix problems in their marketing workflows.

According to the company, the tool addresses the operational bottlenecks that frequently divert teams from strategic initiatives. The agent offers real-time guidance, makes it easier to create support cases, and allows continuous case management through the AI assistant’s conversational interface.
