An update on deepfake video threats

What has happened in the world of deepfake videos? Since I wrote about their creation and weaponization back in October 2020 for Avast’s blog, there have been a number of virtual conferences and new algorithms developed to create these odd pieces of media. Opinion is surprisingly bimodal: either the sky is falling and we are all about to become subjects of revenge porn and misinformation campaigns, or things haven’t (yet) gotten out of hand and the tech is still in its early stages. I will let you be the judge, but here are a few places where you can start your own research.

One blog post that I read on the ethics of “synthetic media” (that is what the people who write deepfake algorithms call their work product, to make it sound more legitimate) compared the deepfake world to the introduction of the Kodak camera some 130 years ago. Back then, folks worried about image manipulation by newbie photographers, and whether photos could show anything other than the literal, “real” state of the world. The Chicken Little scenarios didn’t materialize, and now we all walk around with digital cameras that carry multiple lenses and built-in effect filters that previously were found only on higher-end pro gear.

Still, there is no doubt that the tech will get better: check out this timeline from one of the deepfake scanning vendors, which claims that “the technology was developed so fast that now bad actors can create realistic synthetic videos easily.” That perspective was reinforced by a report earlier this summer from Threatpost, which warned that a “drastic uptick in deepfake tech is happening.” There are plenty of deepfake algorithms out there, as Shelly Palmer recently cataloged.

Hold on. Yes, the tech has been developing quickly, thanks to advances in AI backed by huge computing power. But the fakes aren’t really at the point where they could start wars or create bank panics. Instead, we have seen numerous cyberattacks that make use of synthetic voice recordings (think of your boss leaving you a voicemail telling you to make a particular payment, which actually comes from a hacker), according to presenters at a June conference.

And many predicted deepfake disasters haven’t really materialized. A celebrated case of a mom who allegedly sent deepfake cyberbullying videos to the cheer squad and coach of her daughter’s team turned out to be based on more mundane image manipulation. Still, it could be a wake-up call for better cyberbullying laws, and for clearer standards on how to prove these cases.

I stand with the skeptics (are you really surprised?) and suggest you proceed with caution. No doubt the threats will quickly follow as the tech improves, and perhaps we’ll see that happening in 2022. Don’t hit the panic button yet; instead, prepare yourself for potential attacks that could compromise facial and voice ID security measures.
