JOURNAL ARTICLE

RV-GEMM: Neural Network Inference Acceleration with Near-Memory GEMM Instructions on RISC-V

Abstract

General Matrix Multiply (GEMM) is a fundamental operation in neural networks and plays an important role in artificial intelligence and signal-processing applications. In this paper, we propose three SIMD RISC-V custom instructions to accelerate GEMM computations, supporting 32-bit, 16-bit, and 8-bit fixed-point precisions. We also implement address-calculation and loop-control units alongside the GEMM acceleration module to reduce memory-access overhead. The three custom GEMM instructions, together with the near-memory optimization units, were incorporated into the RV-GEMM processor, which was implemented on an FPGA platform for speedup evaluation and synthesized with Synopsys Design Compiler in a 55 nm CMOS process for hardware-overhead estimation. Compared to the baseline RISC-V processor, RV-GEMM achieved speedups of 15.8×, 28.7×, and 42.5× for GEMM at 32-bit, 16-bit, and 8-bit fixed-point precision, with peak energy efficiencies of 260 GOPS/W, 420 GOPS/W, and 609 GOPS/W, respectively.
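For context, the workload the custom instructions target can be sketched as a scalar baseline: a naive fixed-point GEMM with narrow inputs and 32-bit accumulation. This is a generic illustration only; the function name, layout, and signature are assumptions, not the paper's actual kernel or instruction semantics.

```c
#include <stdint.h>

/* Naive scalar GEMM baseline: C = A * B, with 8-bit fixed-point inputs
 * and 32-bit accumulation (row-major layouts assumed). Each inner-loop
 * multiply-accumulate plus the index arithmetic is the work that SIMD
 * GEMM instructions and near-memory address/loop units would offload. */
void gemm_i8(const int8_t *A, const int8_t *B, int32_t *C,
             int M, int N, int K)
{
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            int32_t acc = 0;
            for (int k = 0; k < K; k++)
                acc += (int32_t)A[i * K + k] * (int32_t)B[k * N + j];
            C[i * N + j] = acc;
        }
    }
}
```

The triple loop also makes visible why dedicated address-calculation and loop-control hardware helps: on a scalar core, a large share of the executed instructions are index updates, branches, and loads rather than multiply-accumulates.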

Keywords:
Computer science; Parallel computing; Artificial neural network; Acceleration; Inference; Computer architecture; Programming language; Artificial intelligence

Metrics

Cited By: 3
FWCI (Field Weighted Citation Impact): 1.59
Refs: 6
Citation Normalized Percentile: 0.74

Topics

Advanced Neural Network Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Parallel Computing and Optimization Techniques (Physical Sciences → Computer Science → Hardware and Architecture)
© 2026 ScienceGate Book Chapters — All rights reserved.