JOURNAL ARTICLE

The Storage Structure of Convolutional Neural Network Reconfigurable Accelerator Based on ASIC

Abstract

With the development of deep convolutional neural networks (CNNs), higher accuracy can be achieved in many domains, including computer vision, speech, and natural language processing. Efficient CNN execution at the hardware level requires overcoming large computational demands while keeping memory bandwidth and power budgets within economical limits. CNN models also adopt different kernel sizes depending on the application, so it is important for the designed architecture to be reconfigurable. In this work, we propose a new high-performance multi-precision reconfigurable architecture (MPRA) and optimize it for recent CNNs that use 3×3/5×5/7×7 convolutions, such as AlexNet, GoogLeNet, and ResNet, with 16-bit and 8-bit fixed-point precision. Synthesized in a 65 nm CMOS technology, the architecture achieves an average performance of 276.5 GOPS in 16-bit × 16-bit mode and 1105.9 GOPS in 8-bit × 8-bit mode, running at 640 MHz and 1 V with a power dissipation of 599 mW. Compared to state-of-the-art designs, the proposed architecture achieves 2.36× energy efficiency, 2.4× to 6.8× area efficiency, and 16.3% to 27.4% higher computational efficiency on the AlexNet benchmark.
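The paper itself does not publish its datapath, but the multi-precision idea it describes can be illustrated with a minimal sketch: activations and weights are quantized to signed fixed-point integers (16-bit or 8-bit), multiply-accumulated in integer arithmetic, and rescaled at the end. The function names, the fractional-bit choices (Q8.8 for 16-bit, Q4.4 for 8-bit), and the sample 3×3 window below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's architecture): fixed-point
# multiply-accumulate for a 3x3 convolution window, in the two precision
# modes an accelerator like the MPRA would support.

def quantize(x, bits, frac_bits):
    """Round x to a signed fixed-point integer of the given width,
    saturating at the representable range."""
    scaled = round(x * (1 << frac_bits))
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, scaled))

def fixed_point_conv3x3(window, kernel, bits, frac_bits):
    """3x3 convolution done entirely in integer arithmetic;
    the accumulator is rescaled to a float only at the end."""
    acc = 0
    for a, w in zip(window, kernel):
        acc += quantize(a, bits, frac_bits) * quantize(w, bits, frac_bits)
    return acc / float(1 << (2 * frac_bits))

# Hypothetical 3x3 activation window and kernel (row-major).
window = [0.5, -0.25, 0.125, 1.0, -0.5, 0.75, 0.0, 0.25, -1.0]
kernel = [0.1, 0.2, -0.1, 0.05, 0.3, -0.2, 0.15, -0.05, 0.1]

exact    = sum(a * w for a, w in zip(window, kernel))   # float reference
approx16 = fixed_point_conv3x3(window, kernel, 16, 8)   # 16-bit mode
approx8  = fixed_point_conv3x3(window, kernel, 8, 4)    # 8-bit mode
```

Because an 8-bit product needs roughly a quarter of the multiplier resources of a 16-bit one, the same array can sustain about four times the operation rate in 8-bit mode, which is consistent with the reported 276.5 vs. 1105.9 GOPS figures.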

Keywords:
Application-specific integrated circuit, Computer science, Convolutional neural network, Computer architecture, Embedded system, Computer hardware, Artificial intelligence

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 9
Citation Normalized Percentile: 0.09

Topics

CCD and CMOS Imaging Sensors
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Advanced Memory and Neural Computing
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Fault Detection and Control Systems
Physical Sciences →  Engineering →  Control and Systems Engineering

