Artificial intelligence, and especially machine learning, has recently gained considerable interest from industry. A new generation of neural networks, built from a large number of successive computing layers, enables a wide range of new applications and services, from smart sensors to data centers. These Deep Neural Networks (DNNs) can interpret signals to recognize objects or situations and drive decision processes. However, their integration into embedded systems remains challenging because of their high computing requirements. This paper presents PNeuro, a scalable, energy-efficient hardware accelerator for the inference phase of DNN processing chains. Simple programmable processing elements organized in SIMD clusters perform all the operations a DNN requires (convolutions, pooling, non-linear functions, etc.). A 28 nm FD-SOI prototype achieves an energy efficiency of 700 GMAC/s/W at 800 MHz. These results open important perspectives for the development of smart, energy-efficient solutions based on Deep Neural Networks.
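For readers unfamiliar with the operations the abstract mentions, the sketch below illustrates the three DNN primitives (convolution, pooling, non-linear activation) in plain Python. This is a minimal, single-channel 2-D illustration of the mathematical operations only; it says nothing about how PNeuro's SIMD clusters actually schedule or implement them.

```python
# Hedged sketch of the three DNN primitives named in the abstract:
# convolution, pooling, and a non-linear function (ReLU here).
# Pure-Python, single-channel 2-D versions for illustration only.

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as commonly used in DNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def max_pool2x2(img):
    """2x2 max pooling with stride 2: keeps the largest value per window."""
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

def relu(img):
    """Element-wise non-linearity: max(0, x)."""
    return [[max(0, x) for x in row] for row in img]
```

In a DNN processing chain these are composed layer by layer, e.g. `relu(conv2d(...))` followed by `max_pool2x2(...)`; each multiply-accumulate inside `conv2d` is one of the MAC operations counted in the GMAC/s/W efficiency figure.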