
Facebook Bans Deepfakes



MISHAWAKA – Facebook’s recent ban of deepfakes, a type of video editing that can transform what a subject appears to say or do, is a step toward combating misinformation ahead of the 2020 election. However, policymakers are concerned the ban doesn’t cover all of the necessary bases.

Deepfakes, a portmanteau of “deep learning” and “fakes,” are heavily edited videos that can seemingly transform what a person is saying and doing. In May 2019, a video of the Speaker of the House of Representatives, Nancy Pelosi, circulated online; in it, Pelosi appeared to be inebriated onstage while answering a question. Later comparison revealed the video had been altered from the original: the speed and pitch of her voice had been changed.
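That kind of edit requires no special software at all. As a rough illustration (not the actual tool used on the Pelosi video), slowing audio so the voice also drops in pitch can be done with a few lines of naive resampling; the waveform below is just a hypothetical list of sample values standing in for a real recording:

```python
def slow_down(samples, factor):
    """Naively slow an audio waveform by resampling.

    Stretching the sample sequence by `factor` makes playback take
    longer (slower speech) and, because every frequency gets stretched
    with it, lowers the pitch -- the classic slowed-tape effect.
    """
    out = []
    n = len(samples)
    i = 0.0
    while i < n - 1:
        lo = int(i)
        frac = i - lo
        # linear interpolation between the two neighboring samples
        out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
        i += 1.0 / factor
    return out

# a toy "waveform": a short ramp of sample values
wave = [0.0, 0.25, 0.5, 0.75, 1.0]
slowed = slow_down(wave, 2.0)  # twice as slow: roughly twice as many samples
```

Real editors use higher-quality resampling, but the point stands: a convincing “drunk” voice took nothing more sophisticated than a playback-speed change.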

Such an example is now referred to as a “cheapfake,” in contrast with the much more subtle deepfake. What makes deepfakes so difficult to detect is the complexity of the process used to create them, namely artificial intelligence software. This is where the “deep learning” in the name comes in: the training such software must undergo to work so well. All that aspiring editors must do to use some of these programs is select a video, choose whatever photos they wish, and feed them into the software, which will then superimpose the photos onto the video in an almost indiscernible manner.
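The superimposing step can be sketched in miniature. Real deepfake tools rely on trained neural networks to warp and blend a face convincingly; this hypothetical toy only alpha-blends a small grayscale “photo” patch onto a “frame,” represented as grids of numbers, to show the idea:

```python
def superimpose(frame, patch, top, left, alpha=0.8):
    """Blend a small grayscale patch onto a frame at (top, left).

    Actual deepfake software learns how to warp and blend a face with
    a neural network; this toy does a fixed alpha blend only to show
    the "superimpose content onto a video" idea in miniature.
    """
    out = [row[:] for row in frame]  # copy so the original frame is untouched
    for r, patch_row in enumerate(patch):
        for c, value in enumerate(patch_row):
            out[top + r][left + c] = (
                alpha * value + (1 - alpha) * frame[top + r][left + c]
            )
    return out

frame = [[0.0] * 4 for _ in range(4)]  # a blank 4x4 "video frame"
patch = [[1.0, 1.0], [1.0, 1.0]]       # a 2x2 "photo"
blended = superimpose(frame, patch, 1, 1)
```

The hard part of a deepfake is not this blending arithmetic but the learned model that makes the blend match lighting, angle and expression, which is exactly why the results can be so hard to spot.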

Naturally, this misinformation being widely circulated could cause disruption. The potential for bad actors to exploit such techniques and misrepresent others prompted many to examine how social media outlets discern fact from forgery. That scrutiny led to Facebook’s recent ban on deepfakes. In a blog post, Monika Bickert, Facebook’s Vice President of Global Policy Management, clarified the requirements for taking down such content.

“Going forward, we will remove misleading manipulated media if it meets the following criteria:

  • It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
  • It is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear to be authentic.”

As for the efficacy of the ban, Bethel’s Assistant Professor of Religion and Philosophy and Professor of Primetime, Keith Koteskey, weighed in.

“I, probably at this point would say, tentatively, I would feel comfortable with where they’re at, even with the acknowledgment that there will be some material that is posted that is inaccurate or false,” Koteskey said. “If it seems outlandish, then that ought to at least give us pause, if it’s a case of something that just seems unbelievable.”

When on Facebook, users are encouraged to keep in mind that this ban does not cover material that is satirical in nature or obvious parody. Furthermore, the people working to take down misinformation aren’t perfect, and not everything that is false will disappear. This is a process; a discerning eye is a necessity for all users today.

Quote courtesy of https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/

Other information courtesy of The Washington Post:

https://www.washingtonpost.com/technology/2020/01/06/facebook-ban-deepfakes-sources-say-new-policy-may-not-cover-controversial-pelosi-video/

https://www.washingtonpost.com/opinions/facebook-banned-deepfakes-but-theres-a-more-menacing-type-of-video-out-there/2020/01/09/49eb5a68-325d-11ea-a053-dc6d944ba776_story.html

https://www.washingtonpost.com/opinions/a-reason-to-despair-about-the-digital-future-deepfakes/2019/01/06/7c5e82ea-0ed2-11e9-831f-3aa2c2be4cbd_story.html