Recent advances in deep learning, the increased availability of large-scale datasets, and improvements in graphics processing hardware have enabled the creation of an unprecedented amount of synthetically generated media content with impressive visual quality. Although such technology is used predominantly for entertainment, deepfake technology is also widely misused for malevolent ends. This potential for malicious use necessitates detection methods capable of reliably identifying manipulated video content. In this work, we aim to develop a learning-based detection method for synthetically generated videos. To this end, we attempt to detect spatiotemporal inconsistencies by leveraging a learning-based, magnification-inspired feature manipulation unit. Although there is existing literature on the use of motion magnification as a preprocessing step for deepfake detection, we instead utilize learning-based magnification elements to build an end-to-end deepfake detection model. We investigate different variations of feature manipulation networks, with both spatially constant and spatially varying amplification. To clarify, although the proposed model draws on the motion magnification literature, we do not perform motion magnification in our experiments; rather, we use the underlying architecture of such networks for feature enhancement. Our objective is to take a step toward applying learnable motion manipulation to improve the accuracy of a target task.
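The distinction between spatially constant and spatially varying amplification can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy example, not the paper's implementation: it applies the generic magnification rule of amplifying a temporal feature difference, first with a single scalar factor and then with a per-location amplification map (which, in a learning-based unit, would be predicted by a subnetwork rather than sampled randomly).

```python
import numpy as np

rng = np.random.default_rng(0)

def manipulate_constant(feat_t, feat_prev, alpha):
    """Spatially constant amplification: one scalar factor
    (a learnable parameter in a trained unit; fixed here for
    illustration) uniformly scales the temporal difference,
    echoing the magnification rule M = F_t + alpha * (F_t - F_{t-1})."""
    return feat_t + alpha * (feat_t - feat_prev)

def manipulate_varying(feat_t, feat_prev, alpha_map):
    """Spatially varying amplification: a per-location map
    (hypothetically predicted by a small subnetwork) scales
    the temporal difference element-wise."""
    return feat_t + alpha_map * (feat_t - feat_prev)

# Toy feature maps for two consecutive frames: (channels, height, width).
f_prev = rng.standard_normal((8, 16, 16))
f_t = rng.standard_normal((8, 16, 16))

out_const = manipulate_constant(f_t, f_prev, alpha=2.0)

# A per-pixel amplification map, broadcast across channels.
alpha_map = rng.uniform(0.0, 3.0, size=(1, 16, 16))
out_var = manipulate_varying(f_t, f_prev, alpha_map)

print(out_const.shape, out_var.shape)
```

In both variants the output keeps the input feature shape, so the unit can be dropped between convolutional stages of a detection backbone; the spatially varying form simply allows the amplification strength to differ across image regions.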
Deep learning, deepfake detection, motion magnification, computer vision, video classification, spatiotemporal analysis
MIRZAYEV, Aydamir and DİBEKLİOĞLU, Hamdi, "Motion magnification-inspired feature manipulation for deepfake detection," Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 32, No. 1, Article 10.
Available at: https://journals.tubitak.gov.tr/elektrik/vol32/iss1/10