Abstract

Hardware-based machine learning is becoming increasingly popular due to its high computation speed. One of the desired characteristics of such hardware is reduced hardware and design cost. This paper proposes a design approach for a neural network that reduces hardware cost in terms of adders and multipliers. Adders and multipliers are among the main components of a neural network and are used in every node. The proposed approach halves the number of multipliers and adders in the network, thereby reducing cost. The technique is based on sharing multipliers and adders between two hidden layers. The method has been tested and validated on multiple datasets: the accuracy of the proposed approach is similar to that of traditional methods in the literature, while it uses only half the number of multipliers and adders. The design is implemented in VHDL on an Altera Arria 10 GX FPGA. Simulation results show that the proposed method retains the performance of the network, achieving a 63% reduction in the hardware design with acceptable accuracy.
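The sharing idea can be illustrated with a minimal behavioral sketch, assuming the shared arithmetic units are time-multiplexed: one bank of multiply-accumulate (MAC) hardware serves hidden layer 1 in one phase and is reused for hidden layer 2 in the next, instead of each layer having its own adders and multipliers. All function names, the ReLU activation, and the two-phase schedule are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch: a single shared MAC bank time-multiplexed
# between two hidden layers. In hardware this halves the adder and
# multiplier count at the cost of an extra compute phase.

def shared_mac(x, w):
    """One shared multiply-accumulate unit: dot product of x and w."""
    acc = 0.0
    for xi, wi in zip(x, w):
        acc += xi * wi  # one multiplier + one adder, reused serially
    return acc

def layer(inputs, weights):
    """Compute one layer's outputs through the shared MAC bank (ReLU nodes)."""
    return [max(0.0, shared_mac(inputs, w)) for w in weights]

def forward(x, w_hidden1, w_hidden2):
    # Phase 1: hidden layer 1 occupies the MAC bank.
    h1 = layer(x, w_hidden1)
    # Phase 2: the SAME MAC bank is reused for hidden layer 2.
    h2 = layer(h1, w_hidden2)
    return h2
```

A dedicated design would instantiate one MAC per layer and run both in parallel; the shared version trades that parallelism for roughly half the arithmetic hardware, which is the cost/latency trade-off the abstract describes.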

Keywords:
Computer science; Architecture; Computer architecture; Artificial neural network; Embedded system; Computer hardware; Artificial intelligence
