One video shows a news anchor with perfectly combed dark hair and a thick beard outlining the United States’ shameful failure to take action against gun violence.
Another video features a female news anchor praising China’s role as a geopolitical partner at an international summit meeting.
But something was off. The anchors’ voices were stilted and did not match the movements of their mouths. Their faces had a pixelated, video-game quality, and their hair looked unnaturally plastered to their heads. The captions were riddled with grammatical errors.
The two broadcasters, purportedly anchors for an outlet called Wolf News, are not real people. They are computer-generated avatars created with artificial intelligence software, and late last year videos of them were shared by pro-China accounts on Twitter. It was the first known instance of “deepfake” video technology being used to create fictitious people as part of an information campaign.
“This is the first time we’ve seen it in the wild,” said Jack Stubbs, vice president of intelligence at Graphika, a research firm that studies disinformation. Graphika discovered the pro-China campaign, which appeared intended to promote China’s interests and undermine the United States among English-speaking audiences.
“Deepfake” technology, which has improved steadily for almost a decade, makes it possible to create digital puppets. The A.I. software is sometimes used to distort public figures: a video that circulated on social media last year falsely showed Volodymyr Zelensky, the president of Ukraine, announcing his surrender. But the software can also create characters out of whole cloth, going beyond the traditional editing tools of Hollywood and blurring the line between fact and fiction.
Disinformation experts have long warned that deepfake videos could further erode people’s ability to distinguish reality from forgeries online, and that the technology could be misused to incite unrest or set off a political scandal. Those predictions have now become reality.
Although the deepfakes used in the pro-China disinformation campaign were crude and largely ineffective, they represent a significant step forward in information warfare. In recent weeks, another video using similar A.I. technology was found online, showing fictional people who described themselves as Americans and voiced support for the government of Burkina Faso, which is under scrutiny for its ties to Russia.
The A.I. software, which can be bought online, creates videos in minutes, with subscriptions starting at just a few dollars a month, Mr. Stubbs said. That makes it far easier to produce content at scale.
Synthesia, a five-year-old software company, makes the deepfake avatars. A customer simply writes a script, which is then read aloud by one of Synthesia’s digital actors.
One A.I. character, George, looks like a veteran executive with gray hair, wearing a blue blazer and a collared shirt. Another avatar, Helia, wears a hijab; Carlo wears a hard hat; Samuel wears a white lab coat like a doctor’s. Customers can also use Synthesia to create avatars based on themselves or on others who have given permission.
Customers mostly use the software for human resources and training videos. The quality is far from perfect, but at only $30 a month the software is cheap and produces videos in minutes, a fraction of the time a traditional production would take.
On its website, Synthesia says the entire process is “as simple as sending an email.”
Some examples of Synthesia’s A.I.-generated characters as they appear in marketing videos and similar campaigns.
Victor Riparbelli, Synthesia’s founder and chief executive, said that whoever used the company’s technology to create the avatars discovered by Graphika had violated its terms of service, which prohibit use of the technology for “political, sexual, criminal, and discriminatory content.” Mr. Riparbelli declined to identify the creators of the Wolf News videos but said their accounts had been suspended.
Mr. Riparbelli said Synthesia has a four-person team dedicated to preventing its deepfake technology from being used to create illicit content. But misinformation, which often contains no slurs, hate speech, or explicit words and imagery, can be far harder to detect.
After being shown one of the Wolf News videos, he said, “it’s very difficult for me to determine whether this is misinformation.” He said he takes “full responsibility” for whatever happens on the platform and called on policymakers to set clearer rules for how A.I. tools can be used.
Mr. Riparbelli said identifying disinformation will only become more difficult. Eventually, he said, deepfake technology will be sophisticated enough that someone could create a Hollywood-quality movie on a laptop.
Graphika connected Synthesia to the pro-China disinformation operation by linking the two Wolf News avatars to innocuous training videos online that featured the same characters, which Synthesia calls “Anna” and “Jason” on its website.
The same avatar generated by A.I. appeared in both marketing and disinformation campaigns.
The avatars read whatever script is written for them in Synthesia’s software, and the characters’ pixelated faces and robotic voices make it easy to tell that something is off.
Anna appeared in the video supporting Burkina Faso’s new government, speaking in a monotone: “Let us all stay mobilized behind the Burkinabe people in this common struggle. Homeland or death, we will overcome.”
Deepfake videos have been circulating online for years. Last year, Kendrick Lamar used the technology to morph into Kanye West and Will Smith in a music video, and pornography websites have been criticized for hosting videos that used it to depict famous actresses without their consent.
Deepfake tools are also popular in China, where companies have been developing them for more than five years. In 2017, iFlytek, a Chinese company, made a fake video of Donald J. Trump speaking Mandarin as a publicity stunt at a conference. iFlytek has since been added to a U.S. blacklist that restricts its access to American-made technology on national security grounds.
After being contacted by The New York Times, Meta, which owns Facebook, Instagram, and WhatsApp, said it had removed at least one account associated with the pro-China deepfake videos. The company, which does not allow video or other media manipulated with the intent to mislead, declined to comment further. Twitter declined to comment.
Graphika said it found the fake videos while tracking social media accounts linked to a pro-China misinformation campaign known as “spamouflage,” which used those accounts to spread the content to others.
The researchers said the use of deepfake technology was more notable than the impact of the videos themselves, which were not widely seen. According to Graphika, the videos featuring the Wolf News anchors were shared at least five times by five accounts between Nov. 22 and Nov. 30, and those posts were then shared by at least two other accounts that appeared to be part of a pro-China network.
Mr. Stubbs said disinformation peddlers will continue to experiment with A.I. software that can produce convincing media that is difficult to detect and verify.