Find coding flaws, earn cash: ChatGPT’s new offer for cybersecurity researchers

Microsoft-backed, Sam Altman-led OpenAI, the developer of ChatGPT, is now offering security researchers up to USD 20,000 to report flaws in its systems, an effort that also helps the company distinguish good-faith hacking from malicious attacks, after it suffered a security incident in March 2023.

OpenAI has launched a bug bounty programme for ChatGPT and other products, saying the initial priority rating for most findings will use the ‘Bugcrowd Vulnerability Rating Taxonomy’.

“Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries. However, vulnerability priority and reward may be modified based on likelihood or impact at OpenAI’s sole discretion. In cases of downgraded issues, researchers will receive a detailed explanation,” the AI research company said.

The security researchers, however, are reportedly not authorised to conduct security testing on plugins created by other people.

OpenAI is also asking ethical hackers to watch for its confidential corporate information that may be exposed through third-party services.

Some examples in this category include Google Workspace, Asana, Trello, Jira, Monday.com, Zendesk, Salesforce and Stripe.

“You are not authorised to perform additional security testing against these companies. Testing is limited to looking for confidential OpenAI information while following all laws and applicable terms of service. These companies are examples, and OpenAI does not necessarily do business with them,” the company said.

In March 2023, OpenAI admitted that some users’ payment information may have been exposed when it took ChatGPT offline owing to a bug.

According to the company, it took ChatGPT offline due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history.

OpenAI discovered that the same bug may have caused the unintentional visibility of “payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window”.

The company traced the outage and data leak to a bug in an open-source Redis client library, which let users see other users’ personal information and chat queries.

ChatGPT displays users’ past queries in a sidebar, allowing them to click on one and regenerate a response from the chatbot.

During the last week of March 2023, numerous ChatGPT users reportedly saw other people’s chat queries listed in their history.

“The bug was discovered in the Redis client open-source library, redis-py. As soon as we identified the bug, we reached out to the Redis maintainers with a patch to resolve the issue,” the company said after completing its analysis of the incident.
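The failure mode OpenAI describes is a classic hazard of pipelined connections shared across requests. Below is a minimal, illustrative Python sketch, not OpenAI’s code and not redis-py’s actual internals: it assumes a toy PipelinedConnection in which replies are read in the order requests were sent, so a request cancelled between send and receive leaves its reply behind for the next caller.

import asyncio

class PipelinedConnection:
    # Toy stand-in for a pooled, pipelined cache connection: requests go
    # out in order and replies come back in order, so correctness depends
    # on every caller reading exactly its own reply.
    def __init__(self):
        self._replies = asyncio.Queue()

    async def send(self, key):
        # The "server" queues one reply per request.
        await self._replies.put(f"cached data for {key}")

    async def recv(self):
        # The caller assumes the next queued reply is its own.
        return await self._replies.get()

async def fetch(conn, key, cancel_before_read=False):
    await conn.send(key)
    if cancel_before_read:
        # Cancelled after sending but before reading: the reply is
        # stranded in the queue for the next caller to consume.
        raise asyncio.CancelledError
    return await conn.recv()

async def main():
    conn = PipelinedConnection()
    # User A's request is cancelled mid-flight; its reply stays queued.
    try:
        await fetch(conn, "user_a:chat_history", cancel_before_read=True)
    except asyncio.CancelledError:
        pass
    # User B now receives the stale reply that belongs to user A.
    leaked = await fetch(conn, "user_b:chat_history")
    print("user_b received:", leaked)  # -> cached data for user_a:chat_history

asyncio.run(main())

Running the sketch prints user A’s cached data under user B’s request, the same class of mismatch OpenAI reported; the standard remedy is to discard or drain a connection whose request was cancelled mid-flight rather than return it to the pool.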

The exposed payment-related information included subscribers’ names, email addresses, payment addresses, credit card expiration dates, and the last four digits (only) of their credit card numbers.

“Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window,” OpenAI’s report stated.

“In the hours before we took ChatGPT offline on Monday (March 20), it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time,” the company added.

OpenAI CEO Sam Altman also apologised for the leak.

“We had a significant issue in ChatGPT due to a bug in an open-source library, for which a fix has now been released and we have just finished validating. a small percentage of users were able to see the titles of other users’ conversation history,” he stated in a tweet.

In a separate episode, Samsung employees got into trouble after they reportedly leaked sensitive company information to ChatGPT on at least three occasions.

According to Korean media reports, one Samsung employee pasted source code from a faulty semiconductor database into ChatGPT and asked it to help find a fix. A second staffer shared confidential code while trying to troubleshoot defective equipment, and a third reportedly submitted the contents of an entire meeting to the chatbot and asked it to produce meeting minutes.

Following these incidents, Samsung put in place an “emergency measure” limiting each employee’s prompts to ChatGPT to 1024 bytes. The leaks happened just three weeks after Samsung lifted a previous ban on its employees using ChatGPT, and they have now prompted the tech giant to develop its own in-house AI.
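A byte cap like the one Samsung reportedly imposed is simple to enforce at a corporate gateway. The Python sketch below is purely hypothetical: the 1024-byte figure comes from the reports above, while the function name and the idea of a gateway-side check are assumptions for illustration.

MAX_PROMPT_BYTES = 1024  # cap reported in the Samsung coverage; illustrative only

def enforce_prompt_cap(prompt: str) -> str:
    # Measure the UTF-8 encoded size rather than the character count,
    # since multi-byte characters (e.g. Korean text) inflate byte length.
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"prompt is {size} bytes; policy caps prompts at {MAX_PROMPT_BYTES} bytes"
        )
    return prompt

Such a cap limits how much of a document or code file can leak in a single prompt, though it cannot stop determined disclosure spread across many prompts.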
