JOURNAL ARTICLE

DeepFake Video Detection using Vision Transformer

Shereen Hussien, Seif Mohamed

Year: 2024  Journal: International Journal of Intelligent Computing and Information Sciences  Vol: 0 (0)  Pages: 0-0

Abstract

Technology is always a double-edged sword, and with the astonishing advances in technology, the DeepFake problem is expected to become more common and serious. DeepFake has recently caused considerable trouble because its harms outweigh its benefits: it enables the deception of individuals, undermines trust, and falsifies evidence. Beyond affecting individual people, it has led to multiple incidents that damaged the image of entire nations. In this work, a model is built to mitigate the negative effects of DeepFake and protect individuals' reputations by detecting the alteration of people's photographs and videos. A model integrating the vision transformer architectures DeepViT and CrossViT is designed to process pre-extracted faces from the FF++ dataset. The model distinguishes between real and fake faces from two perspectives: subclass detection for each manipulation method and overall detection across all types. The proposed model achieves outstanding results, with its highest accuracy of 98% on the FaceSwap manipulation method.
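The dual-scale idea behind CrossViT, which the abstract's model builds on, can be illustrated with a minimal sketch: a face crop is tokenized by two branches using different patch sizes, giving a fine-grained and a coarse token sequence that the transformer branches then process and fuse. This is an illustrative sketch only (the 224×224 crop size, patch sizes 16 and 32, and the `patchify` helper are assumptions, not details from the paper):

```python
import numpy as np

def patchify(img, patch):
    """Split an HxWxC image into flattened non-overlapping patch tokens."""
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    p = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, c)
    # Reorder so each row is one flattened patch of size patch*patch*c.
    return p.transpose(0, 2, 1, 3, 4).reshape(gh * gw, patch * patch * c)

# Assumed 224x224 RGB face crop (as pre-extracted from FF++ frames).
face = np.zeros((224, 224, 3), dtype=np.float32)
small = patchify(face, 16)   # fine-grained branch: 14*14 = 196 tokens
large = patchify(face, 32)   # coarse branch: 7*7 = 49 tokens
print(small.shape, large.shape)  # (196, 768) (49, 3072)
```

In CrossViT-style models, each branch embeds its tokens and the two branches exchange information via cross-attention on their class tokens before a real/fake classification head; the sketch above covers only the tokenization step.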

Keywords:
Computer science, Transformer, Computer vision, Artificial intelligence, Engineering, Electrical engineering

Metrics

Cited by: 5
FWCI (Field-Weighted Citation Impact): 3.07
References: 43
Citation Normalized Percentile: 0.87


Topics

Image Processing Techniques and Applications
Physical Sciences →  Engineering →  Media Technology
Video Analysis and Summarization
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Image and Video Retrieval Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition