Hardware-based machine learning is becoming increasingly popular due to its high computation speed. One desired characteristic of such hardware is reduced hardware and design cost. This paper proposes a neural network design approach that reduces hardware cost in terms of adders and multipliers. Adders and multipliers are among the main components of a neural network and are used in every node. The proposed approach halves the number of multipliers and adders in the network, thereby reducing cost. The technique is based on sharing a multiplier and an adder between two hidden layers. The method has been tested and validated on multiple datasets. The accuracy of the proposed approach is similar to that of traditional methods in the literature, while using only half the number of multipliers and adders. The design is implemented in VHDL on an Altera Arria 10 GX FPGA. Simulation results show that the proposed method retains the network's performance with a 63% reduction in hardware resources and acceptable accuracy.
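The sharing idea can be illustrated with a minimal behavioral sketch. This is not the authors' VHDL design; it is a hypothetical Python model, assuming the sharing works by time-multiplexing: the same physical multiply-accumulate (MAC) units compute hidden layer 1 in one phase and are reused for hidden layer 2 in the next, so each multiplier and adder serves two layers. All function names here are illustrative.

```python
def shared_mac(inputs, weights, bias):
    """One multiply-accumulate unit: the single shared multiplier and
    adder are reused once per cycle over the input vector."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w  # one multiplier + one adder per cycle
    return acc

def relu(v):
    return v if v > 0 else 0

def forward_shared(x, w1, b1, w2, b2):
    """Compute two hidden layers sequentially on the SAME MAC units.
    w1, w2 are weight matrices (lists of rows); b1, b2 are bias vectors."""
    # Phase 1: hidden layer 1 occupies the shared MAC units.
    h1 = [relu(shared_mac(x, row, b)) for row, b in zip(w1, b1)]
    # Phase 2: the same MAC units are reused for hidden layer 2,
    # so only half the multipliers/adders of a fully parallel design exist.
    h2 = [relu(shared_mac(h1, row, b)) for row, b in zip(w2, b2)]
    return h2
```

In hardware, the trade-off is latency (the two layers run in consecutive phases rather than as a pipeline with dedicated units) in exchange for roughly halving the arithmetic resources, which is consistent with the resource reduction reported in the abstract.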
Yuling Luo, Lei Wan, Junxiu Liu, Jim Harkin, Yi Cao
Lai Wei, Dongsheng Liu, Jiahao Lu, Lingsong Zhu, Xuan Cheng
Thanapol Thongkham, Yutana Jewajinda