Abstract

Face anti-spoofing (FAS) plays a vital role in securing face recognition systems against presentation attacks. Existing multi-modal FAS methods rely on stacked vanilla convolutions, which are weak at describing detailed intrinsic information from modalities and easily become ineffective when the domain shifts (e.g., cross-attack and cross-ethnicity). In this paper, we extend central difference convolutional networks (CDCN) [39] to a multi-modal version, intending to capture intrinsic spoofing patterns across three modalities (RGB, depth and infrared). Meanwhile, we also present an elaborate study of single-modal CDCN. Our approach won first place in "Track Multi-Modal" as well as second place in "Track Single-Modal (RGB)" of the ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020 [20]. Our final submission obtains 1.02±0.59% and 4.84±1.79% ACER in "Track Multi-Modal" and "Track Single-Modal (RGB)", respectively. The codes are available at https://github.com/ZitongYu/CDCN.
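The central difference convolution underlying CDCN combines a vanilla convolution with a central-difference term that aggregates gradient-like local detail, weighted by a hyperparameter θ. Below is a minimal, hedged NumPy sketch of this idea for a single-channel 2D input (the function name and loop-based form are illustrative only; the authors' actual PyTorch implementation is in the linked repository). It uses the decomposition that the difference term equals the vanilla response minus θ times the kernel sum multiplied by the center pixel.

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """Sketch of a central difference convolution (single channel, no padding).

    Output at each location: vanilla_conv - theta * sum(w) * x(center),
    which is equivalent to mixing a vanilla convolution with a
    central-difference convolution weighted by theta.
    """
    kh, kw = w.shape
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.zeros((oh, ow))
    w_sum = w.sum()
    for i in range(oh):
        for j in range(ow):
            patch = x[i:i + kh, j:j + kw]
            vanilla = (patch * w).sum()          # standard convolution term
            center = x[i + kh // 2, j + kw // 2]  # central pixel x(p0)
            out[i, j] = vanilla - theta * w_sum * center
    return out
```

With θ = 0 this reduces to a vanilla convolution; with θ = 1 a constant input yields zero response, illustrating how the operator emphasizes fine-grained local differences rather than absolute intensity.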

Keywords:
Face anti-spoofing · Spoofing attack · Facial recognition system · Convolutional neural network · Multi-modal · RGB color model · Computer vision · Pattern recognition · Computer security

Metrics

Cited by: 81 · FWCI (Field Weighted Citation Impact): 7.68 · References: 49 · Citation Normalized Percentile: 0.98 (top 1%)

Topics

Biometric Identification and Security
Physical Sciences →  Computer Science →  Signal Processing
Face recognition and analysis
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
User Authentication and Security Systems
Physical Sciences →  Computer Science →  Information Systems
