So Apple has reportedly limited employee use of OpenAI's ChatGPT and Microsoft's Copilot. Bloomberg's Mark Gurman says ChatGPT has been on Apple's list of restricted software for several months.
Apple isn't alone: Samsung, Verizon, and a who's-who of banks (Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, and JPMorgan) have imposed similar restrictions. OpenAI's policy states that what you type into ChatGPT can be used to train its models. And in March, a bug exposed the data of other ChatGPT users.
In what world would Disney want to leak Marvel spoilers?
I tend to see these bans less as overreaction and more as a loud warning.
AI chatbots are an obvious fit for customer service, where companies use them to cut costs. But effective customer service requires customers to hand over personal information, which can be private or sensitive. How will companies secure their customer-service bots?
And this is not just a customer service issue. Imagine Disney decided to use AI instead of VFX departments, or to write Marvel movies. Every prompt and draft fed into a third-party chatbot would be a potential leak of Marvel spoilers.
It's generally true that young companies, like an early version of Facebook, don't pay much attention to data protection. In that case, limiting the exposure of sensitive material, as OpenAI itself suggests, is a good idea. It's not an AI problem.
These large, sophisticated, secrecy-focused firms may simply be paranoid. Or do AI chatbots have intrinsic problems? I'm curious. Compute is one of AI's biggest costs, and cloud computing is cheaper than building your own data center, but it means your data is processed on a remote server. It's easy to see why banks are concerned: financial data is extremely sensitive.
Beyond accidental leaks, there's also corporate espionage. On the surface, this seems like more of a tech-industry problem, since theft of trade secrets is the classic risk there. But Big Tech companies have moved into streaming, and I wonder if this isn't a problem on the creative side of things as well.
When it comes to technology, there's a constant tug-of-war between privacy and utility, and users have long traded privacy for free services like Google and Facebook. Google's Bard explicitly states that queries are used to "improve and develop Google products, services, and machine-learning technologies."
Maybe these large, sophisticated, secrecy-oriented companies are just being paranoid and there is nothing to worry about. But let's assume they're right. If so, I can imagine a few scenarios for AI chatbots. First, the AI wave turns out to be like the metaverse: a non-starter. Second, AI firms are pressured into an overhaul, defining real security practices and rewriting their data policies. Third, every company that wants to use AI builds its own proprietary model, or at least runs its own processing, which sounds ridiculously expensive and hard to scale. Fourth, we get a privacy nightmare online, in which your airline (or debt collectors, or pharmacy) regularly leaks your personal data.
I'm not sure how this will turn out. But if the most security-conscious companies are locking down AI usage, there may be good reason for the rest of us to do the same.