A growing unease has developed around deepfake technologies, the "21st century's answer to Photoshopping", which make it possible to fabricate evidence of scenes that never existed in reality. Celebrities (e.g. Gal Gadot, Taylor Swift) can be made to star in pornography, and politicians can be shown speaking words they never uttered (e.g. MIT researchers released a video of former US President Richard Nixon delivering the alternate speech prepared in case the Apollo 11 mission had failed). The two main problems with deepfakes are that (a) they are not easily identifiable even by experts, and (b) they cast doubt on the content of genuine videos. There have been countermeasures, such as Facebook and Twitter banning deepfakes from their networks, but the extent of their effectiveness is questionable.
Deepfakes are AI-generated media that stitch individuals into photos or videos they never participated in, produced by everyone from academic and industrial researchers to amateur enthusiasts, visual-effects studios and porn producers. Audio can also be deepfaked to create "voice skins" or "voice clones" of public figures. In March 2019, a German energy firm paid around £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO's voice. Machine learning has made deepfakes far cheaper to create. A creator trains a neural network on hours of real video footage of the target person, giving it a realistic understanding of what he or she looks like from different angles and under different lighting conditions; combined with computer-graphics techniques, the trained network can then superimpose a copy of the person onto a different actor. Generative adversarial networks (GANs), in which two networks compete to produce and to detect forgeries, are expected to be the main engine of deepfakes in the future.
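The adversarial setup described above can be sketched on a toy problem. Everything here is illustrative: the "real data" is a shifted Gaussian rather than video frames, and both the generator and the discriminator are single-parameter models trained with hand-derived gradients, but the alternating train-the-forger / train-the-detector loop is the same one a real deepfake GAN runs at scale.

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0          # "real data" is N(4, 1); the generator starts producing N(0, 1)
LR_D, LR_G = 0.05, 0.05  # learning rates (illustrative choices)

w, b = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + b), says "how real does x look?"
theta = 0.0       # generator G(z) = z + theta, shifts standard-normal noise

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

for _ in range(5000):
    x_real = random.gauss(REAL_MEAN, 1.0)
    x_fake = random.gauss(0.0, 1.0) + theta

    # Discriminator step: descend -log D(real) - log(1 - D(fake)),
    # i.e. learn to score real samples high and fakes low.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w -= LR_D * (-(1.0 - d_real) * x_real + d_fake * x_fake)
    b -= LR_D * (-(1.0 - d_real) + d_fake)

    # Generator step: descend -log D(fake) (the non-saturating GAN loss),
    # i.e. shift theta so the discriminator scores fakes as real.
    x_fake = random.gauss(0.0, 1.0) + theta
    d_fake = sigmoid(w * x_fake + b)
    theta -= LR_G * (-(1.0 - d_fake) * w)

print(f"generator shift theta = {theta:.2f}")  # drifts toward REAL_MEAN
```

After training, the generator's shift has moved toward the real distribution's mean, because that is the only point where the discriminator can no longer tell the two apart; replace the scalars with face images and the linear models with deep networks and the same dynamic produces photorealistic fakes.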
Spotting a deepfake is challenging because the production process is constantly improving. In 2018, for example, researchers observed that deepfake faces did not blink normally; as soon as that weakness was published, deepfakes with realistic blinking appeared. Poorer-quality fakes, however, remain easier to spot. Governments, universities and tech firms have all been funding research into deepfake detection: the Deepfake Detection Challenge, backed by Microsoft, Facebook and Amazon, kicked off recently. Ideally, a deepfake verification tool should be available to everyone.
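The 2018 blink cue can be sketched as a toy heuristic. This is not a real detector: in practice the per-frame eye-openness signal would come from a facial-landmark tracker, and the function names, thresholds and blink-rate baseline below are illustrative assumptions.

```python
def count_blinks(openness, closed_thresh=0.2):
    """Count entries into the 'eyes closed' state in a per-frame
    eye-openness signal (0.0 = fully closed, 1.0 = fully open).
    The 0.2 threshold is an arbitrary illustrative choice."""
    blinks, closed = 0, False
    for v in openness:
        if v < closed_thresh and not closed:
            blinks += 1
            closed = True
        elif v >= closed_thresh:
            closed = False
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_min=2.0):
    """Flag a clip whose blink rate falls below a (made-up) human baseline."""
    minutes = len(openness) / (fps * 60.0)
    return count_blinks(openness) < min_blinks_per_min * minutes

# One minute at 30 fps: a "real" clip with ten brief eye closures,
# and a "fake" clip whose subject never blinks.
real_clip = ([1.0] * 175 + [0.1] * 5) * 10
fake_clip = [1.0] * 1800
print(looks_suspicious(real_clip), looks_suspicious(fake_clip))  # False True
```

As the paragraph notes, this particular cue was engineered around almost as soon as it was published, which is why detection is an arms race rather than a one-off fix.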
Some concerns raised by the issue's stakeholders:
- The clearest threat deepfakes currently pose is to women, in the form of nonconsensual pornography (an estimated 96% of deepfakes online) and revenge porn
- Corporations worry that deepfakes could supercharge scams
- Governments fear that deepfakes threaten democracy (e.g. a 2018 video of Gabon's president Ali Bongo, suspected to be a deepfake, helped spark an attempted coup), can serve as propaganda (e.g. fabricated satellite images of troops massing on a border) and undermine trust
Deepfakes are here to stay. Thus, people should become more critical of what they encounter online.