The Meta security team has confirmed that fake ChatGPT tools are being widely used to hack into user accounts and hijack business pages.
Meta’s new security report reveals that spammers and malware operators follow whatever popular trends and topics attract attention. The biggest trend in tech right now is AI chatbots like ChatGPT and Bing, so tricking users into trying a fake version is now fashionable (sorry, crypto).
Since March, Meta security analysts have found ten different strains of malware masquerading as AI chatbots like ChatGPT. The malware is distributed as browser extensions (a classic) and toolbars; some even appear on official-looking websites that Meta did not name. The Washington Post reported last week that these fake ChatGPT scams are spreading through Facebook ads.
Some of these malicious ChatGPT tools even have working AI integrated to make them appear to be a genuine chatbot. Meta has blocked over 1,000 links to the malware variants that were shared on its platforms. The company has also explained the technical background of how scammers access accounts: by hijacking logged-in sessions and maintaining access, a method similar to the one that brought down Linus Tech Tips.
Meta is offering an updated support flow for businesses that have been hacked or shut down on Facebook, to help them fix the problem and regain control. Business pages are a prime target: hackers first infect the personal accounts of the Facebook users who manage them with malware, then use that access to hijack the page.
Meta has also begun rolling out Meta Work Accounts, which support organizations’ existing, more secure single sign-on (SSO) services and do not require a personal Facebook account. Once business accounts have been migrated, the hope is that malware such as the bizarro ChatGPT variants will find them much harder to attack.