Yu-Hsin Kao, Chia-Chun Liu, Yiwen Ding, Tsung-Chu Huang
Recent advances in artificial intelligence focus mainly on acceleration and on reliability in safety-critical applications. Winograd convolution has been shown to offer potential for both acceleration and fault tolerance. However, the state-of-the-art triple modular redundancy approach incurs triple computation overhead and is unsuitable for most CNNs, which use small kernels. In this paper, an AN-code self-checking technique is applied to support the proposed double modular redundancy technique. Evaluation and experimental results show that, for an F(32x32, 3x3) convolution, 75% of the multiplications can be eliminated and errors and overflows can be reduced by 42 dB. Compared with the state-of-the-art fault-tolerant Winograd convolution, 33% of the extra multiplication-count overhead is saved. As for fault tolerance, the block error rate is also reduced by a factor of A/2.
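To make the two ingredients of the abstract concrete, the sketch below shows a minimal 1-D Winograd F(2,3) convolution (4 multiplications instead of the 6 a direct method needs; the 2-D F(32x32, 3x3) case in the paper generalizes this idea) together with a simple AN-code residue self-check. The constant `A`, the helper names, and the encoding of the filter are illustrative assumptions, not the paper's exact construction.

```python
A = 3  # AN-code constant; illustrative choice, not the paper's value

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap 1-D convolution
    using 4 multiplications (a direct method needs 6)."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Transformed filter (precomputable once per filter)
    G = [g0, (g0 + g1 + g2) / 2, (g0 - g1 + g2) / 2, g2]
    # Transformed input tile
    D = [d0 - d2, d1 + d2, d2 - d1, d1 - d3]
    # The only 4 multiplications
    m = [D[i] * G[i] for i in range(4)]
    # Inverse transform back to the two outputs
    return [m[0] + m[1] + m[2], m[1] - m[2] - m[3]]

def winograd_f23_an(d, g):
    """Same computation with an AN-encoded filter: every fault-free
    output is a multiple of A, so a nonzero residue flags an error."""
    g_enc = [A * gi for gi in g]          # encode: g -> A*g
    y_enc = winograd_f23(d, g_enc)
    if any(y % A != 0 for y in y_enc):    # self-check on the residue
        raise ArithmeticError("AN-code check failed: fault detected")
    return [y // A for y in y_enc]        # decode: y -> y/A
```

For integer inputs the half-integer transform values are exact in binary floating point, so the residue check holds in the fault-free case; an injected bit-flip in a product generally breaks the multiple-of-A property and is detected.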