JOURNAL ARTICLE

Exploring Quantization and Mapping Synergy in Hardware-Aware Deep Neural Network Accelerators

Abstract

Energy efficiency and memory footprint of a convolutional neural network (CNN) implemented on a CNN inference accelerator depend on many factors, including the weight quantization strategy (i.e., data types and bit-widths) and the mapping (i.e., the placement and scheduling of the network's elementary operations on the accelerator's hardware units). We show that enabling rich mixed quantization schemes during implementation can open a previously hidden space of mappings that utilize the hardware resources more effectively. CNNs with quantized weights and activations, combined with suitable mappings, can significantly improve the trade-offs among accuracy, energy, and memory requirements compared to less carefully optimized implementations. To find, analyze, and exploit these mappings, we: (i) extend a general-purpose state-of-the-art mapping tool (Timeloop) to support mixed quantization, which it does not currently offer; (ii) propose an efficient multi-objective optimization algorithm that finds the most suitable bit-widths and mapping for each layer executed on the accelerator; and (iii) conduct a detailed experimental evaluation to validate the proposed method. On two CNNs (MobileNetV1 and MobileNetV2) and two accelerators (Eyeriss and Simba), we show that for a given quality metric (such as accuracy on ImageNet), energy savings of up to 37% are achievable without any accuracy drop.
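The multi-objective search described in point (ii) trades off energy against accuracy across per-layer bit-width and mapping choices. As an illustration only (not the paper's actual algorithm, whose details are not given in the abstract), the core selection step of any such search can be sketched as extracting the Pareto front from a set of evaluated candidate configurations; the candidate values below are hypothetical:

```python
def pareto_front(configs):
    """Keep configurations not dominated in both energy and accuracy drop.

    A config `c` is dominated if some other config is at least as good in
    both objectives and strictly better in at least one.
    """
    front = []
    for c in configs:
        dominated = any(
            o["energy"] <= c["energy"] and o["acc_drop"] <= c["acc_drop"]
            and (o["energy"] < c["energy"] or o["acc_drop"] < c["acc_drop"])
            for o in configs
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical per-layer candidates: weight bit-width, normalized energy
# (as would be reported by a cost model such as Timeloop), and accuracy drop.
candidates = [
    {"bits": 8, "energy": 1.00, "acc_drop": 0.0},
    {"bits": 6, "energy": 0.75, "acc_drop": 0.1},
    {"bits": 5, "energy": 0.80, "acc_drop": 0.3},  # dominated by the 6-bit config
    {"bits": 4, "energy": 0.63, "acc_drop": 0.8},
]
```

Running `pareto_front(candidates)` keeps the 8-, 6-, and 4-bit configurations and discards the 5-bit one, which costs more energy and loses more accuracy than the 6-bit option; a practical search would re-evaluate such fronts per layer while exploring bit-widths and mappings jointly.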

Keywords:
Computer science, Memory footprint, Quantization (signal processing), Convolutional neural network, Exploit, Computer engineering, Inference, Hardware acceleration, Artificial neural network, Design space exploration, Efficient energy use, Computer hardware, Artificial intelligence, Computer architecture, Algorithm, Embedded system, Field-programmable gate array, Programming language

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 1.06
References: 21
Citation Normalized Percentile: 0.65


Topics

Advanced Neural Network Applications
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Advanced Memory and Neural Computing
Physical Sciences → Engineering → Electrical and Electronic Engineering
Adversarial Robustness in Machine Learning
Physical Sciences → Computer Science → Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Hardware-aware Quantization/Mapping Strategies for Compute-in-Memory Accelerators

Shanshi Huang, Hongwu Jiang, Shimeng Yu

Journal: ACM Transactions on Design Automation of Electronic Systems, Year: 2022, Vol: 28 (3), Pages: 1-23
JOURNAL ARTICLE

Hardware neural network accelerators

Olivier Temam

Journal: International Conference on Hardware/Software Codesign and System Synthesis, Year: 2013, Pages: 1-1