Can your business trust AI?

Among the chief concerns is the possibility that AI tools could be used to monitor employees and violate their privacy

The rapid development of generative AI tools such as Microsoft’s Copilot and OpenAI’s ChatGPT has stoked fears that the technology could create a range of privacy and security problems, particularly in the workplace.

In May 2024, privacy campaigners branded Microsoft’s new Recall tool a potential “privacy nightmare” because of its ability to take screenshots of your laptop every few seconds.

The feature has caught the attention of the United Kingdom’s Information Commissioner’s Office, which has asked Microsoft to provide further details about the product’s safety ahead of its launch in Copilot+ PCs.

ChatGPT is a further source of concern: it has demonstrated screenshotting capabilities in its forthcoming macOS app, which privacy experts warn could capture sensitive information.

The United States House of Representatives has barred staff members from using Microsoft’s Copilot after its Office of Cybersecurity determined that the tool posed a risk to users because of “the threat of leaking House data to non-House approved cloud services.”

Meanwhile, market research firm Gartner has warned: “Using Copilot for Microsoft 365 introduces the risks of sensitive data and content exposure internally and internationally.”

Additionally, Google had to modify AI Overviews, a new search feature, last month after screenshots of odd and misleading answers to queries went viral.

Excessive exposure

One of the main concerns for people using generative AI at work is the risk of unintentionally disclosing private information. Camden Woollven, group head of AI at risk management firm GRC International Group, describes most generative AI systems as “basically big sponges. They train their language models by consuming vast volumes of information from the internet.”

According to Steve Elcock, CEO and founder of Elementsuite, AI businesses are “eager for data to train their models” and are “apparently making it behaviourally desirable” to do so. As a result of this massive data collection effort, sensitive information may find its way “into somebody else’s ecosystem,” and “it might potentially be removed later with astute prodding,” warns Jeff Watkins, chief product and technology officer at digital consultancy xDesign.

Additionally, there’s the risk that hackers will target artificial intelligence systems directly.

“Theoretically, an attacker might siphon off critical data, plant fake or misleading outputs, or use the AI to propagate malware if they managed to obtain access to the large language model (LLM) that runs a company’s AI tools,” Woollven explains.

Consumer AI tools carry some obvious hazards. However, Phil Robinson, chief consultant at security company Prism Infosec, notes that a growing number of potential problems are also emerging with “proprietary” AI products such as Microsoft Copilot, which are broadly considered safe for workplace use.

If access privileges are not locked down, these tools could be exploited to view sensitive data: workers might ask an AI assistant to surface records revealing credentials, pay scales, or M&A activity, information that could later be leaked or sold.

Another concern is the potential for AI tools to be used to monitor employees, possibly violating their privacy. Microsoft says of its Recall feature that “your photos are yours; they stay locally on your PC,” and that “you are always in control with the privacy you can trust.”

However, Elcock notes that “it doesn’t seem very long until this technology may be utilised for staff monitoring.”

Self-censorship

Although generative AI comes with a number of potential risks, there are precautions companies and individual employees can take to improve privacy and security.

According to Lisa Avvocato, vice president of marketing and community at data provider Sama, the first piece of advice is to avoid entering sensitive information in a prompt for a publicly accessible tool like ChatGPT or Google.

Be generic when crafting a prompt to avoid oversharing. Instead of saying, “Here is my budget, prepare a proposal for expenditure on a sensitive project,” she advises asking, “Prepare a proposal template for budget expenditure.” Draft with AI first, then add the sensitive material you need to include.
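
As a rough illustration of that workflow, the sketch below asks a model for a generic template and only merges the sensitive figures locally, after the response comes back, so they are never sent to the tool. It assumes the OpenAI Python SDK and an API key in the environment; the model name, placeholder tokens, and example values are purely illustrative.

    # Sketch of the "generic prompt first, sensitive data second" workflow.
    # Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the
    # environment; the model name and placeholder tokens are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    # Step 1: ask for a generic template -- no budget figures or project names are sent.
    generic_prompt = (
        "Prepare a proposal template for budget expenditure. "
        "Use placeholders such as {PROJECT_NAME} and {TOTAL_BUDGET} where specifics belong."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": generic_prompt}],
    )
    template = response.choices[0].message.content

    # Step 2: add the sensitive material locally, after the AI call, so it never
    # leaves your machine or enters a third-party training pipeline.
    proposal = template.replace("{PROJECT_NAME}", "Project Aurora")
    proposal = proposal.replace("{TOTAL_BUDGET}", "$2.4m")
    print(proposal)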

If you use AI for research, verify the information it offers to avoid problems like those seen with Google’s AI Overviews, Avvocato advises. Ask it for citations and links to its sources. And even if you ask an AI to write code, you should still review it before approving it.

Microsoft has said it is important to follow the principle of “least privilege,” under which users are granted access only to the data they need, for Copilot to function properly. Robinson of Prism Infosec calls this “an important factor,” adding: “Organisations can’t just rely on technology and hope for the best. They need to set the foundation for these systems.”
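
To make the idea concrete, here is a small, purely hypothetical sketch of least privilege applied to an AI assistant’s retrieval step: documents are filtered by the requesting user’s role before anything reaches the model, so restricted records never enter the prompt. The roles, documents, and helper function are invented for illustration.

    # Hypothetical illustration of "least privilege" in an AI assistant's retrieval step:
    # the assistant only sees documents the requesting user is already entitled to read.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Document:
        title: str
        allowed_roles: frozenset

    CORPUS = [
        Document("Holiday policy", frozenset({"employee", "hr", "finance"})),
        Document("Pay scales 2024", frozenset({"hr"})),
        Document("M&A target shortlist", frozenset({"executive"})),
    ]

    def retrievable_for(user_role: str) -> list:
        """Filter the corpus before anything is placed in the model's context window."""
        return [doc for doc in CORPUS if user_role in doc.allowed_roles]

    # A general employee asking the assistant about pay scales gets nothing back,
    # because the restricted documents were never included in the prompt.
    print([d.title for d in retrievable_for("employee")])  # ['Holiday policy']
    print([d.title for d in retrievable_for("hr")])        # ['Holiday policy', 'Pay scales 2024']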

It’s also important to remember that ChatGPT uses the data you share to train its models unless you opt for the enterprise edition or turn the setting off.

The assurances on offer

Companies that build generative AI into their products say they are taking all necessary precautions to protect privacy and security. Microsoft is keen to discuss the security and privacy safeguards in its Recall product, and points out that users can control the feature under Settings > Privacy & Security > Recall & Snapshots.

Google says generative AI in Workspace “does not undermine our basic privacy protections for giving users choice and control over their data,” and states that user data is not used for advertising purposes.

OpenAI likewise reiterates how it maintains security and privacy in its products, with enterprise versions of its tools available with additional controls.

An OpenAI representative tells WIRED, “We take precautions to secure people’s data and privacy—and we want our AI models to learn about the world, not private individuals.”

OpenAI says it provides controls over how data is used, including self-service tools for accessing, exporting, and deleting personal information, as well as the option to opt out of having content used to improve its models. The firm says it does not train its models on data or chats from ChatGPT Team, ChatGPT Enterprise, or its API, and that its models do not learn from usage by default.

In any case, your AI colleague appears to be here to stay, and according to Woollven, the risks will only increase as these tools become more sophisticated and commonplace in the workplace.

Multimodal AI is already starting to appear; GPT-4o, for example, can analyse and generate speech, video, and images. Companies now have to worry about protecting more than just text-based data.

In light of this, Woollven says, individuals and businesses should adopt the mindset of treating AI like any other third-party service: share nothing you wouldn’t want made public.

Current challenges and limitations

There are several obstacles in the way of AI adoption across businesses, raising concerns about both its effectiveness and its ethical implications. One recurring issue is the “black box” dilemma: AI systems are often opaque about how they work.

In many cases we don’t fully know what data an AI has drawn on to reach a conclusion. It appears to make decisions behind closed doors, keeping users in the dark about what is happening.

Accountability is another significant worry. When errors occur, it becomes difficult to identify the culprit: the fault might lie with the developer, with the data, or with the algorithm itself, and tracing its origin is a challenge.

AI integration can also raise complex ethical and societal problems, such as job losses and privacy concerns. Automated systems excel at tasks that people typically perform, which may reduce the number of jobs available and strain the social fabric.

Developing trust in AI

Many AI systems function as “black boxes,” and it can be difficult to trust them because it is hard to see how they work. Making AI more transparent, so users can see how it reaches its decisions, makes it far easier to trust.

In sectors such as banking or healthcare, understanding the reasoning behind any decision is crucial. Transparent decision-making makes these systems more dependable and helps foster trust.

Before being put into use, AI systems must go through extensive testing and validation to make sure they function as intended. That means checking not only technical accuracy but also ethical ramifications such as bias or potential misuse.

Regular updates and audits are essential to keep AI systems secure and dependable as they evolve. Consider an AI system that approves loans: it must be checked routinely to ensure it has not developed biases in favour of or against particular groups, and adjusted if it has, to maintain fairness. This continuous upkeep allows AI to carry out its duties effectively and ethically.
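
A minimal sketch of what such a routine check might look like appears below: it compares approval rates across groups in a decision log and flags the system for review when one group’s rate falls far below another’s. The sample data and the 80% threshold (a common rule of thumb) are illustrative assumptions, not a full fairness audit.

    # Hypothetical bias check for a loan-approval log: compare approval rates by group
    # and flag the system if any group falls below 80% of the best-treated group's rate.
    from collections import defaultdict

    decisions = [  # (applicant_group, approved?) drawn from a decision log
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved

    rates = {g: approvals[g] / totals[g] for g in totals}
    print("Approval rates:", rates)

    best = max(rates.values())
    flagged = [g for g, r in rates.items() if r < 0.8 * best]
    if flagged:
        print("Review needed: possible bias against", flagged)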

Ensuring that AI operates fairly requires a strong foundation of ethics and rules. Such a framework must address every aspect, including data handling, privacy protection, accountability, and transparency.

An impartial authority should oversee and enforce these rules. Building the guidelines into the AI development process from the outset helps ensure that AI behaves ethically and protects user privacy at every stage.

While many people view AI as a replacement for human decision-making, the better strategy is to use it as a tool that augments human capabilities. AI and humans each have their strengths, and combining them produces the best results.

Pairing AI’s efficiency with human critical thinking allows people to evaluate AI’s outputs, provide context, and make the final decisions.

In scenarios where AI is employed for predictive policing or disease diagnosis, human oversight ensures that final decisions take into account the wider implications and ethical complexities that AI might not fully grasp.

Educating employees about the ethical applications of AI, as well as its limitations, can strip away much of the mystique around the technology and helps people use AI tools intelligently and efficiently in their own environment.

Training that helps everyone understand how AI works, and how it can genuinely benefit people rather than pose a threat, can allay fears and dispel misconceptions.

Going forward, it will be critical for enterprises to build trust in AI as it becomes more integrated into our personal and professional lives. These approaches will help close the gap and ensure that people view AI as a positive development rather than a cause for concern.
