OpenAI has revealed new details regarding why it shut down ChatGPT on Monday. It now says that payment information of some users may have been compromised during the incident.
According to a post from the company, a bug in redis-py, an open source library, caused a caching issue that may have let some users see the last four digits and expiration date of another user's credit card, along with that user's first and last name, email address, and payment address. Portions of other users' chat histories may also have been visible.
It's not the first time a caching problem has let users see other people's data: in 2015, Steam served users pages containing other users' information. OpenAI takes pains over the security and safety implications of its AI, but here it was tripped up by a well-known class of security problem.
According to OpenAI, the payment information leak could have affected approximately 1.2 percent of ChatGPT Plus subscribers who used the service between 4AM and 1PM ET on March 20th, 2023.
You were only affected if you were actively using the app during that window.
OpenAI explains that there are two scenarios in which payment data could have been shown to an unauthorized user. First, a user who visited the My account > Manage subscription screen during that timeframe may have seen information belonging to another ChatGPT Plus subscriber. Second, the company says some subscription confirmation emails were sent to the wrong person during the incident; those emails included the last four digits of a user's credit card number.
OpenAI says it's possible that both of these scenarios also occurred before the 20th, though it doesn't have any confirmation of that. The company has reached out to users whose payment information may have been exposed.
How did this happen? It all came down to caching. The company gives a detailed technical explanation in its post, but the bottom line is that it uses Redis to cache user information. In certain circumstances, a canceled Redis request could cause corrupted data to be returned for a different request. Usually the app would receive that data and throw an error.
But if the returned data happened to be the same kind of data the other person was asking for (say, both requests were loading an account page), the app decided everything was fine and displayed it.
That's why users were seeing other people's chat history and payment information: they were served cached data that was meant for another user but went undelivered because of a canceled request. Only active users were affected; data belonging to people who weren't using the app wasn't being requested or cached in the first place.
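To make the mechanism concrete, here is a simplified, hypothetical sketch (not OpenAI's or redis-py's actual code) of this kind of failure: requests and replies travel over one shared connection in strict order, so a request that is canceled after being sent, but before its reply is read, leaves a stale reply in the pipe for the next caller.

```python
from collections import deque

class SharedConnection:
    """Toy model of a shared Redis connection: replies return in FIFO order."""
    def __init__(self, server_data):
        self.server_data = server_data
        self.pending_replies = deque()

    def send(self, key):
        # The server answers every request it receives, in order.
        self.pending_replies.append(self.server_data[key])

    def read_reply(self):
        # The client assumes the next reply belongs to its own request.
        return self.pending_replies.popleft()

conn = SharedConnection({"user:alice": "alice@example.com",
                         "user:bob": "bob@example.com"})

# Alice's request is sent, then canceled before its reply is read:
conn.send("user:alice")
# (no read_reply() here -- the request was torn down mid-flight)

# Bob's request now consumes Alice's stale reply:
conn.send("user:bob")
print(conn.read_reply())  # -> alice@example.com (the wrong user's data)
```

Because Bob's request received data of the expected shape, nothing looked wrong to the application layer, which is exactly why no error was thrown in the scenario described above.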
Making matters worse, on March 20th OpenAI inadvertently introduced a server change that caused a spike in canceled Redis requests, increasing the odds that the bug would return an unrelated cache entry.
OpenAI says the bug, which appeared in the redis-py library, has been fixed. It is also changing its software and practices to make sure this doesn't happen again, including adding redundant checks to verify that the data being served actually belongs to the user who requested it, and reducing the chance that its Redis cluster produces errors under heavy load.
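A redundant check of the kind OpenAI describes might look something like the following hypothetical sketch (the function and field names are illustrative, not OpenAI's actual code): before serving a cached record, verify that its owner matches the requesting user, even though the cache key should already guarantee that.

```python
def fetch_account(cache, user_id):
    """Return the cached account record for user_id, or None on mismatch."""
    record = cache.get(f"account:{user_id}")
    if record is not None and record.get("user_id") != user_id:
        # Ownership mismatch: treat the entry as corrupted and evict it
        # rather than serving another user's data.
        cache.pop(f"account:{user_id}", None)
        return None  # a real system would fall back to the database here
    return record

# A poisoned cache entry: Bob's key holds Alice's record.
cache = {"account:bob": {"user_id": "alice", "email": "alice@example.com"}}
print(fetch_account(cache, "bob"))  # -> None: the mismatched entry is rejected
```

The point of the check is defense in depth: even if the caching layer misbehaves again, the mismatched record never reaches the user.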
I'd argue those checks should have been in place from the start, but it's good that OpenAI has them now. Open source software is vital to the modern web, but it comes with challenges: because anyone can use it, a single bug can affect many services and companies, and a malicious actor can target a specific piece of software knowing exactly how to exploit it. There are checks meant to make that harder, but as companies like Google have shown, it's best to work proactively to make sure it doesn't happen.