Apple restricts its employees’ use of AI tools such as OpenAI’s ChatGPT for fear that confidential information entered into these systems could be collected or leaked.
A report by The Wall Street Journal states that Apple employees were also warned not to use GitHub’s AI programming assistant, Copilot. Bloomberg’s Mark Gurman tweeted that ChatGPT had been on Apple’s restricted software list “for months.”
Apple has every reason to be cautious. OpenAI stores all conversations between users and ChatGPT by default. These conversations can be used to train OpenAI’s systems, and they can also be reviewed by moderators to determine whether the terms of service have been violated.
OpenAI introduced a feature in April that allows users to disable chat history, not long after several EU nations began investigating the tool for possible privacy violations. However, even with history disabled, OpenAI retains all conversations for 30 days, with the option to “review them for abuse” before permanently deleting them.
Apple is probably right to be concerned that its employees may enter confidential information into the system, given how useful ChatGPT is for tasks such as improving code or brainstorming ideas. One of OpenAI’s moderators could then see that information. Researchers have also found that it is possible to extract training data from certain language models through the chat interface, though there is no evidence that ChatGPT is susceptible to such attacks.
Apple is not the only company to implement such a restriction: JPMorgan, Verizon, and Amazon have imposed similar bans.
Apple’s decision to ban the app is noteworthy, however, as OpenAI released an iOS app for ChatGPT just this week. The free app supports voice input but is currently available only in the US. OpenAI has announced that it will launch the app in more countries, along with an Android version.