JOURNAL ARTICLE

Quantized Lite Convolutional Neural Network Hardware Accelerator Design with FPGA for Face Direction Recognition

Abstract

In this study, a quantized lite convolutional neural network (CNN) is applied to accelerate face direction recognition on a Xilinx ZedBoard FPGA platform. The 8-layer lite CNN consists of three convolution layers, one max-pooling layer, two average-pooling layers, and two fully connected layers. First, the weight parameters and bias values of each layer are extracted by training the CNN in software; the parameters are then quantized to 8-bit integer (INT8) precision for the inference calculations. Through hardware acceleration of the quantized lite CNN model, the facial direction is correctly detected, achieving fast recognition. Compared with a software-based implementation on a personal computer, the FPGA-based acceleration achieves a speed-up of about 1.44 times, with an inference time of 0.289 seconds.
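The abstract's INT8 quantization step can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the common symmetric per-tensor scheme, in which each layer's float weights are mapped to integers in [-127, 127] by a single scale factor, and dequantized by multiplying the scale back:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization (illustrative sketch).

    Maps float weights to integers in [-127, 127] using one scale
    factor per tensor; assumes the tensor is not all zeros.
    """
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from INT8 values."""
    return q.astype(np.float32) * scale

# Hypothetical layer weights, for illustration only.
w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
```

On hardware, the multiply-accumulate operations of each convolution then run entirely on the INT8 values, with the scale factors applied once per layer; the maximum rounding error per weight is half the scale step.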

Keywords:
Field-programmable gate array; Computer science; Convolutional neural network; Hardware acceleration; Quantization (signal processing); Convolution (computer science); Artificial intelligence; Pooling; Software; Speedup; Computation; Weighting; Artificial neural network; Facial recognition system; Pattern recognition (psychology); Computer hardware; Parallel computing; Computer vision; Algorithm


Topics

Face and Expression Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Image and Video Stabilization (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Image Processing Techniques and Applications (Physical Sciences → Engineering → Media Technology)