Zao’s viral moment was quickly connected to the idea that US politicians are vulnerable to deepfakes: video or audio fabricated using artificial intelligence to show a person doing or saying something they did not do or say. That threat has been promoted by US lawmakers themselves, including at a recent House Intelligence Committee hearing on deepfakes. The technology tops a list of eight disinformation threats to the 2020 campaign in a report published Tuesday by NYU.
Yet some people tracking the impact of deepfakes say it’s not big-name US politicians who have the most to fear. Rather than changing the fate of nations by felling national politicians, they say, the technology is more likely to become a small-scale weapon used to extend online harassment and bullying.
One reason: US public figures like presidential candidates take—and deflect—a lot of public flak already. They’re subject to constant scrutiny from political rivals and media organizations, and have well-established means to get out their own messages.
“These videos are not going to cause a total meltdown,” says Henry Ajder, who tracks deepfakes in the wild at Deeptrace, a startup working on technology to detect such clips. “People like this have significant means of providing provenance on images and video.”
Ajder says there’s a “good chance” deepfakes will appear involving 2020 candidates. But he expects them to be an extension of the memes and trolling that originate in the danker corners of candidates’ online fanbases, not something that jolts the race to the White House onto a new trajectory.
" What sort of men would think it is acceptable to subject a young girl to this level of brutality and violence? an attack like this in ourcommunities and we must all work together. "
The group was more concerned about deepfakes amplifying local harassment than altering national politics. Journalists and activists working on human rights issues such as police brutality and gay rights already face disinformation campaigns and harassment on platforms like WhatsApp, sometimes involving sexual imagery, Gregory says. What little is known about deepfakes in the wild so far supports the idea that this kind of harassment will be the first major negative impact of the technology.