DOI: 10.17586/1023-5086-2022-89-08-24-32
UDC: 159.9
Deepfake as the basis for digitally collaging “impossible faces”
Барабанщиков В.А., Маринова М.М. Deepfake как основа цифрового коллажирования «невозможного лица» // Оптический журнал. 2022. Т. 89. № 8. С. 24–32. http://doi.org/10.17586/1023-5086-2022-89-08-24-32
Barabanshchikov V.A., Marinova M.M. Deepfake as the basis for digitally collaging “impossible faces” [in Russian] // Opticheskii Zhurnal. 2022. V. 89. № 8. P. 24–32. http://doi.org/10.17586/1023-5086-2022-89-08-24-32
V. A. Barabanshchikov and M. M. Marinova, "Deepfake as the basis for digitally collaging 'impossible faces'," Journal of Optical Technology 89(8), 448–453 (2022). https://doi.org/10.1364/JOT.89.000448
Subject of study. A method of synthesizing video images by means of deepfake face swap, a technology that produces authentic-looking video clips with false or substituted faces and no evident traces of manipulation. The creation of video images of “impossible faces” (such as a chimeric face, whose left and right halves belong to different people, or a Thatcherized face, in which the eye and mouth regions are rotated by 180°) is described step by step, using the DeepFaceLab software as an example.
Aim of study. To present and validate deepfake technology as a method of digitally collaging images of “impossible faces.”
Method. The implementation of the method is illustrated with experimental results on the patterns of perception of moving “impossible faces” and the differences between them.
Main results. Perceptual phenomena previously recorded under static conditions were reproduced for dynamic models, permitting new interpretations. Original faces were evaluated positively under both static and dynamic conditions, regardless of image inversion. Under all conditions, images of “impossible faces” were perceived as unattractive, disharmonious, bizarre, and artificial. Presenting the videos together with speech audio increased the adequacy of evaluations for upright orientation.
Practical significance. Digital collaging methods can significantly expand the capabilities of researchers in the field of interpersonal perception. Information technologies can also expedite the creation of stimulus models of “impossible faces,” which are needed for further investigation of the human psyche in the course of communication.
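For illustration, the sketch below shows how the two kinds of static “impossible face” stimuli can be composited with OpenCV and NumPy. This is a minimal, hypothetical example, not the authors’ DeepFaceLab pipeline (which operates on video): the file names and region coordinates are placeholders, and in practice the eye and mouth boxes would be derived from a facial-landmark detector.

```python
"""Hypothetical sketch of the two "impossible face" manipulations
described in the abstract, using OpenCV and NumPy. Not the authors'
DeepFaceLab pipeline; coordinates and file names are placeholders."""
import cv2
import numpy as np


def chimeric_face(face_a: np.ndarray, face_b: np.ndarray) -> np.ndarray:
    """Compose a chimeric face: left half from face_a, right half from face_b.

    Both images are assumed to be pre-aligned (same scale, eyes at the
    same coordinates), as they would be after face alignment.
    """
    h, w = face_a.shape[:2]
    face_b = cv2.resize(face_b, (w, h))  # ensure matching dimensions
    out = face_a.copy()
    out[:, w // 2:] = face_b[:, w // 2:]  # paste the right half of face_b
    return out


def thatcherize(face: np.ndarray, regions) -> np.ndarray:
    """Rotate the eye and mouth regions by 180° in an otherwise upright face.

    `regions` is a list of (x, y, w, h) boxes for both eyes and the mouth;
    here they are placeholders, normally taken from facial landmarks.
    """
    out = face.copy()
    for (x, y, rw, rh) in regions:
        patch = out[y:y + rh, x:x + rw]
        out[y:y + rh, x:x + rw] = cv2.rotate(patch, cv2.ROTATE_180)
    return out


if __name__ == "__main__":
    a = cv2.imread("face_a.png")  # placeholder input images
    b = cv2.imread("face_b.png")
    cv2.imwrite("chimeric.png", chimeric_face(a, b))
    # Illustrative boxes: left eye, right eye, mouth, as (x, y, w, h).
    boxes = [(60, 90, 60, 35), (140, 90, 60, 35), (90, 190, 80, 45)]
    cv2.imwrite("thatcherized.png", thatcherize(a, boxes))
```

For moving stimuli, the same compositing logic would have to be applied consistently frame by frame after face alignment, which is what motivates the use of a face-swapping framework such as DeepFaceLab in the study.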
Keywords: deepfake, machine learning, interpersonal perception, video image of a face, impossible face, virtual sitter, dynamics and statics of the stimulus model, chimeric face, Thatcherized face
Acknowledgements: The research was carried out within the state assignment of the Ministry of Science and Higher Education of the Russian Federation, No. 730000Ф.99.1.БВ09АА00006.
OCIS codes: 100.2000, 100.3008, 100.6890, 150.0155
References:
1. V. A. Barabanshchikov, M. M. Marinova, and A. D. Abramov, “Virtual personality of a moving Thatcherized face,” Psikhol. Nauka Obraz. 26(1), 5–18 (2021).
2. L. Meitner and V. V. Selivanov, “Critical analysis of the use of virtual technologies in clinical psychology in Europe (based on the content of the journal Cyberpsychology, Behavior, and Social Networking),” Sovrem. Zarub. Psikhol. 10(2), 36–43 (2021).
3. V. A. Barabanschikov, M. M. Marinova, and A. D. Abramov, “The virtual personality of the Thatcherized face in statics and dynamics,” in Neurotechnologies, Y. Shelepin, S. Alekseenko, and N. Nan Chu, eds. (Izdatel’stvo VVM, St. Petersburg, 2021), pp. 37–49.
4. V. V. Selivanov, “Mental states in a high-level VR-environment,” in Child in the Digital World: The International Psychological Forum (2021), p. 116.
5. V. A. Barabanschikov and M. M. Marinova, “Perception of video images of a chimeric face,” Poznanie i Perezhivanie 1(1), 112–134 (2020).
6. V. A. Barabanschikov and M. M. Marinova, “Deepfake in face perception research,” Eksp. Psikhol. 14(1), 4–18 (2021).
7. V. A. Barabanschikov and O. A. Korol’kova, “Perception of ‘live’ facial expressions,” Eksp. Psikhol. 13(3), 55–73 (2020).
8. V. A. Barabanschikov and A. V. Zhegallo, “Oculomotor activity in perception of dynamic and static facial expressions of the face,” Eksp. Psikhol. 11(1), 5–34 (2018).
9. I. Perov, D. Gao, N. Chervoniy, K. Liu, S. Marangonda, C. Umé, Mr. Dpfks, C. F. Facenheim, L. RP, J. Jiang, S. Zhang, P. Wu, B. Zhou, and W. Zhang, “DeepFaceLab: a simple, flexible and extensible face swapping framework,” arXiv: 2005.05535v4 (2020).
10. S. Anwar and N. Barnes, “Real image denoising with feature attention,” in IEEE International Conference on Computer Vision (2019), pp. 3155–3164.
11. B. U. Mahmud and A. Sharmin, “Deep insights of deepfake technology: a review,” arXiv:2105.00192 (2021).
12. R. Chawla, “Deepfakes: how a pervert shook the world,” Int. J. Adv. Res. Dev. 4(6), 4–8 (2019).
13. B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, and C. C. Ferrer, “The DeepFake detection challenge (DFDC) dataset,” arXiv:2006.07397 (2020).
14. Z. Feng, Z. Li, A. Cai, L. Li, B. Yan, and L. Tong, “A preliminary study on projection denoising for low-dose CT imaging using a modified dual-domain U-Net,” in 3rd International Conference on Artificial Intelligence and Big Data (2020).
15. D. Güera and E. J. Delp, “Deepfake video detection using recurrent neural networks,” in 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (2018).
16. D. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980v9 (2017).
17. X. J. Mao, C. Shen, and Y. B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in Neural Information Processing Systems (2016), pp. 2802–2810.
18. M. Maras and A. Alexandrou, “Determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos,” Int. J. Evidence Proof 23, 255–262 (2018).
19. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).