JOURNAL ARTICLE

Neural network-based accelerators for transcendental function approximation

Abstract

The general-purpose approximate nature of neural network (NN) based accelerators has the potential to sustain the historic energy and performance improvements of computing systems. We propose the use of NN-based accelerators to approximate mathematical functions in the GNU C Library (glibc) that commonly occur in application benchmarks. Using our NN-based approach to approximate cos, exp, log, pow, and sin, we achieve an average energy-delay product (EDP) that is 68x lower than that of traditional glibc execution. In full applications, our NN-based approach achieves an EDP that is 78% of that of traditional execution, at the cost of an average mean squared error (MSE) of 1.56.
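To make the approach concrete, the following is a minimal sketch of the core idea: train a small neural network to approximate one of the listed transcendental functions (here sin) and measure its MSE. The one-hidden-layer topology, input range, and hyperparameters are illustrative assumptions, not the accelerator configuration or training setup evaluated in the paper.

```python
# Illustrative sketch only: approximate sin with a tiny MLP and report MSE.
# Architecture and hyperparameters are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs sampled over one period of sin.
x = rng.uniform(-np.pi, np.pi, size=(4096, 1))
y = np.sin(x)

# One hidden layer of 16 tanh units (assumed size), linear output.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.01
for step in range(20000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backpropagation of the MSE loss.
    grad_pred = 2 * err / len(x)
    gW2 = h.T @ grad_pred
    gb2 = grad_pred.sum(0)
    grad_h = grad_pred @ W2.T * (1 - h**2)
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(0)

    # Plain gradient-descent update.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate approximation quality on held-out points.
xt = rng.uniform(-np.pi, np.pi, size=(1024, 1))
mse = np.mean((np.tanh(xt @ W1 + b1) @ W2 + b2 - np.sin(xt)) ** 2)
print(f"MSE of NN approximation of sin: {mse:.2e}")
```

In a hardware realization such as the one the paper proposes, the trained weights would be loaded into an NN accelerator and the glibc call replaced by an invocation of that accelerator; the software sketch above only demonstrates the function-approximation step.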

Keywords:
Artificial neural network; Transcendental function; Function approximation; Energy-delay product; Mean squared error; Approximation error; Algorithm; Applied mathematics; Artificial intelligence; Computational science; Computer science; Mathematics; Statistics

Metrics

Cited by: 24
FWCI (Field-Weighted Citation Impact): 2.41
References: 24
Citation Normalized Percentile: 0.89

Topics

Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Parallel Computing and Optimization Techniques (Physical Sciences → Computer Science → Hardware and Architecture)
Numerical Methods and Algorithms (Physical Sciences → Computer Science → Computational Theory and Mathematics)