This article is part 2 of a series reviewing selected papers from Altmetric’s list of the top 100 most-discussed scholarly works of 2019.
The rapidly rising impact of deepfake videos
Deepfake is a term for videos and presentations enhanced by artificial intelligence and other modern technology to present falsified results. One of the best examples of deepfakes involves the use of image processing to produce video of celebrities, politicians or others saying or doing things that they never actually said or did. [Techopedia]
A September 2019 Deeptrace report1 on the state of deepfakes found that since the phenomenon emerged in late 2017, it has developed very quickly, with rapidly growing societal impact and technological sophistication. At the time of the report's publication, there were 14,678 deepfake videos online, 96% of which were pornographic.
While their use in a pornographic context continues to grow, deepfakes are also increasingly being used for the purpose of political disinformation. For example, the Deeptrace report documents the use of deepfake videos to accelerate political unrest in Gabon, and to trigger a political sex scandal in Malaysia. In a further example, a Belgian political party created a deepfake video of a Donald Trump speech, which it says was done to start a debate about climate change. Deepfake videos of speeches by Barack Obama and Facebook CEO Mark Zuckerberg have also made international headlines, but these videos were made with the intent of raising awareness of the problem rather than to mislead.
Alarm is also being sounded with increasing urgency about the likely impact of deepfakes on businesses. An Axios article reports that deepfake audio has already been used to impersonate CEOs and steal millions from companies, and warns that businesses are seriously under-prepared, “leaving an opening for a well-timed deepfake to drop like a bomb.”
What does this mean for knowledge management?
Organisations have traditionally taken a one-sided utopian view of knowledge management (KM), based on the notion that management and employees will always want to use knowledge in positive and productive ways for the benefit of the organisation.
However, in a 2006 paper2, Steven Alter, Professor Emeritus at the University of San Francisco, explores what he describes as the dark side of KM. He identifies three dark side KM goals – distortion, suppression, and misappropriation – and a range of dark side KM tactics used to achieve these goals. Unless effectively addressed, these goals and tactics can undermine positive KM initiatives in organisations. I’ve already documented how this has happened in the case of beleaguered aircraft manufacturer Boeing.
Deepfake videos give the dark side an advantage: they are a very effective tool for carrying out several of these dark side KM tactics. But their potential impact goes beyond this. An article in MIT Technology Review contends that the biggest threat of deepfakes isn’t the deepfakes themselves, but that they will make people stop believing that real things are real. People may therefore increasingly doubt, or even completely reject, the validity of organisational knowledge.
An example of the rapid technological advances in deepfake production
The #1 most discussed article3 in Altmetric’s top 100 list for 2019 presents an example of the rapid technological advances in deepfake production. Previously, a large dataset of images of a single person was needed to create a “talking head” deepfake video. However, as shown in the video above and summarised in the following abstract, it is now possible to create one from just a few images, potentially even a single image. Deepfakes can even be created from artworks, such as the famous painting of the Mona Lisa.
Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.
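The adversarial setup the abstract refers to, a generator trained against a discriminator that tries to tell its output from real data, can be illustrated with a deliberately tiny sketch. This is not the paper's method: the authors use convolutional networks, meta-learning over a large video dataset, and tens of millions of parameters, whereas the toy example below uses a one-parameter generator and a logistic discriminator on one-dimensional data purely to show how the two opposing gradient updates interact. All names and the toy distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from N(4, 1). The generator must learn to
# shift standard-normal noise toward this distribution.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = 0.0      # generator: g(z) = z + theta (a single learned shift)
w, b = 0.1, 0.0  # discriminator: d(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(2000):
    x_real = real_batch(64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = z + theta

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0,
    # i.e. gradient descent on -log d(real) - log(1 - d(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: push d(fake) toward 1 (non-saturating loss -log d(fake)).
    d_fake = sigmoid(w * x_fake + b)
    grad_theta = np.mean((d_fake - 1.0) * w)
    theta -= lr * grad_theta

# After training, theta should have moved toward the real mean (around 4).
print(theta)
```

In the real system, the same adversarial pressure is applied to image generators and discriminators whose parameters are first initialised by meta-learning, which is what lets fine-tuning succeed from only a handful of images of a new person.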
- Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The State of Deepfakes: Landscape, Threats, and Impact. Amsterdam: Deeptrace.
- Alter, S. (2006, January). Goals and tactics on the dark side of knowledge management. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS’06) (Vol. 7, pp. 144a-144a). IEEE.
- Zakharov, E., Shysheya, A., Burkov, E., & Lempitsky, V. (2019). Few-Shot Adversarial Learning of Realistic Neural Talking Head Models. arXiv preprint arXiv:1905.08233.
Also published on Medium.