Deepfakes find new ways to spread misinformation
Last week, California took action against a growing digital phenomenon known as "deepfakes."
The technology, a form of synthetic image manipulation, has mostly been used for humor and harmless novelty. I've personally seen a dozen deepfaked videos in which Nicolas Cage's face is artistically rendered into different movies.
It's hilarious to imagine Cage playing Morpheus in “The Matrix.” But as the technology advances, it raises concerns, primarily over how people can use it for dangerous forms of impersonation and misinformation.
These deepfakes pose an increasingly worrisome threat to the reliability of information in the global age. MIT Technology Review points to an incident in late 2018 in which the likeness of the president of Gabon, Ali Bongo Ondimba, was replicated to deliver a fake national address.
The national confusion it caused is one of many incidents in which the technology has been used in ways that are anything but charming.
Gov. Gavin Newsom (D-Calif.) signed AB 730 into law, which criminalizes the malicious distribution of deceptively doctored audio or video of political candidates in the run-up to an election. The law sheds light on recent efforts to regulate technology and on what, exactly, poses a danger to the public.
The bill, while never actually using the word "deepfakes," focuses on the broader category of "doctored" works. Newsom also pointed out the insidious nature of some of these image-altering technologies.
"While political deepfakes have generated headlines, at least one study recently found that the majority of deepfakes are pornographic," according to The Verge. The onslaught of non-consensual pornography, grafting celebrities' or strangers' faces onto explicit footage, shows how these forms of image technology can carry serious social and legal consequences.
Non-consensual pornography accounts for a startling 95% of the deepfakes circulating online, according to Vice Media. There are also incidents in which deepfakes have been used to impersonate political dissidents and activists.
The advancement of synthetic imaging technology makes human rights organizations and misinformation watchdogs uneasy. The technology, particularly its audio generation, has become more accurate, making fakes far more convincing.
There's also the fact that both the open-source code and the applications built on it are becoming more available to the general public, including internet scammers and political operatives intent on uploading images or videos designed to hurt the lives of others.
Blackmail, as MIT Technology Review noted, has become a slowly building concern within human rights communities.
These questions feed a growing concern: dissolving faith in online media, uncertainty over what can be trusted as a reliable source and the corruption of the very means by which people share their own pictures with the world.
With deepfakes riding alongside the rise of fake news, these discussions point to an ongoing struggle over how mainstream and digital media contend with new technologies, and how reliability and trust get caught in the crossfire.
It's always discouraging to see immoral or downright devious uses emerge for a new technology, especially one capable of so much light-hearted, comedic fun.
Hopefully, as time passes, the way everyone can approach these mediums will be more focused on a playful tone. For now, I'll just keep it in mind, and focus on Cage acting as the entire cast of “Titanic.”