Kjell Sherman   My Press Releases

Deepfake Detectors Are on the Way

Published on 6/13/2019

For centuries, viewers of history's most famous painting have wondered what Mona Lisa had on her mind when she flashed the world's most famous kinda-smile.

Well, wonder no more.

Now you can ask her.

Computer Generated Imagery (CGI) has been a fixture in movies ever since the pioneering computer-animated title sequence of Alfred Hitchcock's Vertigo back in 1958.

It's evolved with technology to the point that, as the Kinks' Ray Davies sang in Celluloid Heroes, celluloid heroes literally never really die.

That's Peter Cushing in Rogue One: A Star Wars Story, reprising his villainous role from Star Wars: A New Hope 39 years prior.

By the time Cushing appeared in this one, though, he'd been laid to rest for 22 years. Bringing him back to CGI life in 2016 was an understandably complex process:

And the new Mona Lisa?

Not so much.

Here's a segment from Supasorn Suwajanakorn's TED talk in 2018, showing both how much simpler the process of replicating a human image has become and a beneficial reason for doing it:

Speaking of Arnold Schwarzenegger, here's a clip featuring comedian Bill Hader impersonating him that went viral because it's a deepfake, i.e., a product of human image synthesis based on artificial intelligence.

Roughly 10 seconds into it, watch Hader's face subtly transform into Schwarzenegger's:

This is all well and good in the field of entertainment, but the implications here have become ever more daunting, especially as deepfake creations are becoming simpler to produce.

Videos serving as raw material are fertile ground, and two in particular have raised deepfake awareness among the general public:

The conundrum Facebook, Instagram, and other platforms face is, where do you draw the line between satire and subversion?

A key development in making deepfakes an everyman's tool is Suwajanakorn's announcement that, given the easily obtained software, all that's needed to create one is a collection of the subject's photos.

This means someone you trust as an authority figure could have his or her image manipulated for deceptive purposes. Imagine grabbing footage and/or photos of a well-known spokesman like, say, TV weatherman-turned-online pitch dude Todd Gross ...

... and creating an unauthorized video that uses his trustworthiness to sell snake oil.

Or, consider the fallout if your face were grafted onto someone else's body in a revenge porn video or a bank surveillance video showing a robbery in progress.

It's already being done to some people, so this is not a hypothetical situation at all.

Clearly, as harm to citizens is a real possibility, governments are now taking up the issue. However, as technology usually moves more quickly than legislators, odds are a viable solution to the issue won't be happening anytime soon.

So, it's gonna be up to individuals to determine what's real, and software is emerging to assist.

There are three types of deepfakes:

  • face swaps,
  • lip syncs, and
  • puppet-master fakes

All rely on rewriting how a victim's entire face or mouth moves.

Free software that's a level above human scrutiny for deepfake detection includes SurfSafe, which is available as a plugin for the Chrome browser.

When you hover over any image that appears in the browser, it instantly checks that image against more than 100 trusted news and fact-checking sites -- such as Snopes -- to see whether it's appeared there before.

It also saves a signature of every photo its users see while they’re browsing the internet with the plugin installed.

It's important to note that SurfSafe and similar software such as Reality Defender base their effectiveness on hashing, a mathematical process that turns images and videos into unique strings of numbers. They build their indexes of real and doctored images from these hashes, because searching and storing strings of numbers is much quicker than comparing full-sized images.

Thus, they're far from perfect. If a deepfake substantially changes its source images, the software's algorithms may not recognize their correlation to the originals.
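The hashing idea, and its blind spot, can be sketched with a toy "average hash." This is an illustrative choice; neither SurfSafe nor Reality Defender publishes its exact scheme, and a real tool would hash decoded pixels from actual image files at scale.

```python
# Toy perceptual hashing sketch (assumed scheme, for illustration only).
# An 8x8 grayscale image is reduced to a 64-bit hash: each bit records
# whether that pixel is brighter than the image's average brightness.

def average_hash(pixels):
    """pixels: list of 64 grayscale values (0-255). Returns a 64-bit int."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits: a small distance means 'probably the same image'."""
    return bin(h1 ^ h2).count("1")

original  = [10] * 32 + [200] * 32   # half dark, half bright
retouched = [12] * 32 + [198] * 32   # slight brightness tweak
rewritten = [200, 10] * 32           # substantially altered layout

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(retouched)))  # 0: still matches
print(hamming_distance(h_orig, average_hash(rewritten)))  # 32: match is missed
```

The light retouch leaves every bit of the hash unchanged, so an index lookup still finds the original; the heavily rewritten image flips half the bits and slips past the detector, which is exactly the imperfection described above.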

Still, they've established a user-friendly beachhead in the battle to determine who's putting what into your information gathering. They'll evolve to keep up with Artificial Intelligence (AI).

There's only one caveat:

Actually getting people to use them.
