A tech ethics group focused on artificial intelligence has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the company’s rollout of AI text generation tools has been biased, deceptive, and a risk to public safety.
The Center for AI and Digital Policy (CAIDP) filed its complaint shortly after the publication of a high-profile open letter calling for a pause on large-scale generative AI experiments. CAIDP president Marc Rotenberg was among the letter’s signatories, alongside a number of AI researchers and OpenAI cofounder Elon Musk. Like the letter, the complaint calls for stricter government oversight and a slowdown in the development of generative AI models.
The CAIDP complaint identifies potential dangers posed by OpenAI’s GPT-4 text model, which was announced in mid-March. These include the model’s capacity to produce malicious code and highly targeted propaganda, as well as the risk that biased training data could reinforce stereotypes or produce unfair race and gender preferences in areas such as hiring. The complaint also points to significant privacy failures in OpenAI’s product interface, such as a recent bug that exposed users’ ChatGPT histories and possibly their payment details to other users.
OpenAI has publicly acknowledged potential dangers from AI text generation, but CAIDP argues that GPT-4 crosses a line that warrants regulatory action. It seeks to hold OpenAI liable for violating Section 5 of the FTC Act, which prohibits unfair or deceptive trade practices. The complaint states that OpenAI released GPT-4 for commercial use with full knowledge of these risks, including potential bias and harmful behavior. It also characterizes AI hallucinations, the phenomenon in which generative models confidently assert untrue facts, as a form of deception, warning that ChatGPT could promote deceptive commercial statements and advertising and thereby fall under the FTC’s jurisdiction.
CAIDP asks the FTC to halt further commercial deployment of GPT models and to require independent assessments before any future rollouts. It also asks the agency to create a publicly accessible reporting tool, similar to the one that lets consumers file fraud complaints, and to establish formal rules for generative AI systems, building on the FTC’s ongoing but still relatively informal research into and evaluation of AI tools.
As CAIDP notes, the FTC has already expressed interest in regulating AI tools, warning in recent years that biased AI systems could draw enforcement action. FTC Chair Lina Khan said this week that the agency would be watching for signs of large, established tech companies trying to lock out competition. An investigation of OpenAI, one of the most prominent players in the generative AI arms race, would mark a significant escalation of those efforts.