JOURNAL ARTICLE

Accelerating low bit-width convolutional neural networks with embedded FPGA

Abstract

Convolutional Neural Networks (CNNs) achieve high classification accuracy but require complex computation. Binarized Neural Networks (BNNs), with binarized weights and activations, simplify computation but suffer a marked loss of accuracy. In this paper, low bit-width CNNs are compared with BNNs and standard CNNs to show that low bit-width CNNs are better suited for embedded systems. An architecture that uses a two-stage arithmetic unit (TSAU) as the basic processing element is proposed to process each layer iteratively for low bit-width CNN accelerators. DoReFa-Net, trained with weights represented in 1 bit and activations in 2 bits, is then implemented on a Zynq XC7Z020 FPGA with a performance of 410.2 GOPS. The accelerator meets the real-time requirements of embedded applications with a throughput of 106 FPS and a top-5 accuracy of 73.1% on the ImageNet dataset, and it outperforms existing FPGA-based CNN accelerators in the tradeoff among accuracy, energy efficiency, and resource efficiency.
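The low bit-width scheme the abstract refers to follows the DoReFa-Net quantization recipe: weights are binarized to their sign scaled by the mean absolute value, and activations are clipped to [0, 1] and rounded to 2^k - 1 uniform levels (k = 2 here). The sketch below, a minimal numpy illustration and not the paper's hardware implementation, shows both quantizers; the function names are illustrative.

```python
import numpy as np

def quantize_weights_1bit(w):
    # DoReFa-style 1-bit weights: sign(w), scaled by the
    # mean absolute value of the full-precision weights.
    scale = np.mean(np.abs(w))
    return scale * np.where(w >= 0, 1.0, -1.0)

def quantize_activations(x, k=2):
    # DoReFa-style k-bit activations: clip to [0, 1], then
    # round onto 2^k - 1 uniformly spaced levels.
    x = np.clip(x, 0.0, 1.0)
    levels = 2 ** k - 1
    return np.round(x * levels) / levels
```

With k = 2, activations take one of only four values {0, 1/3, 2/3, 1}, which is what lets a hardware multiply-accumulate degenerate into the cheap two-stage add/shift logic a TSAU-style processing element exploits.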

Keywords:
Convolutional neural network, Computer science, Field-programmable gate array, Computation, Throughput, Artificial neural network, Hardware acceleration, Computer hardware, Speedup, Artificial intelligence, Computer engineering, Parallel computing, Pattern recognition (psychology), Algorithm

Metrics

Cited By: 71
FWCI (Field-Weighted Citation Impact): 5.08
References: 21
Citation Normalized Percentile: 0.96


Topics

Advanced Neural Network Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
CCD and CMOS Imaging Sensors (Physical Sciences → Engineering → Electrical and Electronic Engineering)
Brain Tumor Detection and Classification (Life Sciences → Neuroscience → Neurology)