JOURNAL ARTICLE

A Hybrid RRAM-SRAM Computing-In-Memory Architecture for Deep Neural Network Inference-Training Edge Acceleration

Abstract

This paper presents a hybrid computing-in-memory architecture for the inference and training stages of a two-layer deep neural network, comprising 96 Kb of RRAM and 4 Kb of 7T SRAM. Combining the merits of RRAM and SRAM, the hybrid architecture provides fast weight updating for training while achieving 997x lower standby power consumption and 1.35x higher area efficiency than an SRAM-only scheme. A classification accuracy of 91% is obtained on a resized MNIST task.
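The division of labor the abstract describes can be illustrated with a small simulation: weights live in a fast-update buffer (standing in for SRAM) during training, then are quantized to a few discrete conductance levels (standing in for RRAM) for low-power inference. This is a hypothetical sketch, not the paper's implementation; the function names, the outer-product update, and the 16-level quantization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(sram_weights, x, grad_out, lr=0.1):
    """Weight update in the SRAM buffer (fast, high-endurance writes)."""
    sram_weights -= lr * np.outer(x, grad_out)
    return sram_weights

def commit_to_rram(sram_weights, levels=16):
    """Quantize trained weights onto discrete RRAM conductance levels.

    The level count is an assumption for illustration only.
    """
    w_min, w_max = sram_weights.min(), sram_weights.max()
    step = (w_max - w_min) / (levels - 1)
    if step == 0:  # degenerate case: all weights equal
        return sram_weights.copy()
    return np.round((sram_weights - w_min) / step) * step + w_min

def rram_inference(rram_weights, x):
    """Analog-style MAC: input 'voltages' times conductances, summed per column."""
    return x @ rram_weights

# Toy usage: one training update, commit, then inference.
sram = rng.standard_normal((4, 3))
sram = train_step(sram, rng.standard_normal(4), rng.standard_normal(3))
rram = commit_to_rram(sram)
y = rram_inference(rram, np.ones(4))
```

The design point being sketched is that frequent, small writes land in SRAM (where endurance and write speed are cheap), while the quantized, frozen copy in RRAM serves the many inference reads at low standby power.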

Keywords:
Computing-in-memory; Resistive random-access memory (RRAM); Static random-access memory (SRAM); Deep neural network; Inference; Training; Edge computing; Edge device; Hardware acceleration; MNIST database; Computer architecture

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 0.09
Refs: 0
Citation Normalized Percentile: 0.43

Topics

Advanced Memory and Neural Computing
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Ferroelectric and Negative Capacitance Devices
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
Machine Learning and ELM
Physical Sciences →  Computer Science →  Artificial Intelligence