This paper presents a hybrid computing-in-memory architecture for the inference and training stages of a two-layer deep neural network, comprising 96 Kb of RRAM and 4 Kb of 7T SRAM. By combining the merits of RRAM and SRAM, the hybrid architecture provides fast weight updating for training while achieving 997x lower standby power consumption and 1.35x higher area efficiency than an SRAM-only scheme. A classification accuracy of 91% is obtained on a resized MNIST task.
Yuguang Chen, Zhiwei Liu, Ying-Jing Tsai
Gokul Krishnan, Zhenyu Wang, Injune Yeo, Li Yang, Jian Meng, Maximilian Liehr, Rajiv Joshi, Nathaniel C. Cady, Deliang Fan, Jae-sun Seo, Yu Cao
Seyed Hassan Hadi Nemati, Nima Eslami, Mohammad Hossein Moaiyeri
Wooseok Choi, Myonghoon Kwak, Seyoung Kim, Hyunsang Hwang
Weiye Tang, Lanheng Nie, Cailian Ma, Hao Wu, Yiyang Yuan, Shuaidi Zhang, Qihao Liu, Feng Zhang