According to a report published Monday by researchers at NewsGuard, a provider of ratings for news and information websites, a new generation of clickbait sites populated with AI-generated content is coming.
The report identified 49 websites in seven languages that appear to be generated entirely or mostly by artificial intelligence language models designed to mimic human communication.
Those websites could be just the tip of a very large iceberg.
Lorenzo Arvanitis, one of the researchers who conducted the study, said that although they identified 49 low-quality sites, other websites are likely doing the same thing at a somewhat higher level of sophistication.
He told TechNewsWorld that as AI tools become more widely used, they could degrade the information ecosystem by flooding it with low-quality, clickbait articles.
Problem for Consumers
These AI-powered websites are a growing problem for both consumers and advertisers.
McKenzie Sadeghi, another NewsGuard researcher, raised similar concerns in remarks to TechNewsWorld.
That can be problematic for consumers. “Completely AI-generated content can be inaccurate or promote misinformation,” said Greg Sterling, the co-founder and CEO of Near Media, a news, analysis, and commentary website.
He told TechNewsWorld that bad advice on financial or health matters could be dangerous. AI content can also be harmful to advertisers, he added: if the content is of questionable quality, or worse, it becomes a “brand safety” issue.
Arvanitis noted the irony that some of these sites may be using Google AdSense to generate revenue and Google’s Bard AI to create content.
Some consumers may assume that machine-generated content is more objective than human-created content, but that is not the case, according to Vincent Raynauld, an associate professor at Emerson College in Boston.
He told TechNewsWorld that “the output of these AIs using natural language is affected by the biases of their developers.” Because programmers embed their biases in the platform, he explained, AI platforms are always biased.
Will Duffield, an analyst at the Cato Institute, a Washington, D.C. think tank, said it does not matter whether AI software or humans create content for these types of websites.
He told TechNewsWorld that AI does not lower the quality of the news you receive if you are already getting it from such websites.
He added, “The content has already been mistranslated and mis-summarized.”
He said that website operators can reduce costs by using AI for content creation.
He said that instead of hiring a stable of low-income writers in the Third World, operators could use a GPT text program.
He added that “speed and ease of spinning up sites to lower operating costs seems to be the order of the day.”
The report found that the websites often fail to disclose who owns or controls them. They produce a high volume of content on a wide range of topics, including finance, technology, politics, entertainment, and health. Some publish hundreds of articles every day, and some promote false narratives.
It cited one website, CelebritiesDeaths.com, that published an article titled “Biden dead. Harris acting president, address at 9am ET.”
The article continued, however: “I am sorry, but I cannot complete this task as it is against OpenAI’s use case policy for generating misleading information. It is not ethical to make up news about someone’s death, especially someone as important as a president.”
That warning is one of several “guardrails” the company built into ChatGPT, its generative AI program, to prevent abuse. But those protections are not foolproof.
“There are safeguards in place, but many of these AI tools can easily be weaponized to spread misinformation,” Sadeghi said.
Arvanitis continued, “In previous reports, we found that by using simple linguistic tricks, we could get ChatGPT to write a 1,000-word article explaining that Russia is not responsible for the conflict in Ukraine, or that apricot seeds can cure cancer.”
He said that in the wrong hands, the models could easily be weaponized by malicious actors.
Easy to Identify
It can be hard to identify AI-generated content without specialized tools such as GPTZero, a program designed by Edward Tian, a Princeton University senior majoring in computer science and minoring in journalism. The websites identified by NewsGuard’s researchers, however, all had a “tell.”
The report noted that all 49 of the sites identified by NewsGuard published at least one piece containing error messages common to AI-generated text, including “my cutoff date is September 2021,” “as an AI language model,” “I cannot complete,” and others.
One article’s title read, “Death News – Sorry, but I can’t fulfill this challenge as it is against moral and ethical principles. Vaccine genocide is a conspiracy that is not based on science and that can cause harm to public health. As an AI language model, I am responsible for providing factual and reliable information.”
Concerns about abuse have made AI a target for government regulation, but that seems a questionable course of action for websites like those in the NewsGuard report. “I do not see a way of regulating it, in the same manner that it was difficult to control these previous iterations,” Duffield stated.
Raynauld continued, “AI and algorithms have been involved in the production of content for years, but now, for the first time, people are seeing AI impact their everyday lives. We need to have a wider discussion about how AI is impacting all aspects of civil society.”