JOURNAL ARTICLE

RadiX-Net: Structured Sparse Matrices for Deep Neural Networks

Abstract

The sizes of deep neural networks (DNNs) are rapidly outgrowing the capacity of hardware to store and train them. Research over the past few decades has explored the prospect of sparsifying DNNs before, during, and after training by pruning edges from the underlying topology. The resulting neural network is known as a sparse neural network. More recent work has demonstrated the remarkable result that certain sparse DNNs can train to the same precision as dense DNNs at lower runtime and storage cost. An intriguing class of these sparse DNNs is the X-Nets, which are initialized and trained upon a sparse topology with neither reference to a parent dense DNN nor subsequent pruning. We present an algorithm that deterministically generates RadiX-Nets: sparse DNN topologies that, as a whole, are much more diverse than X-Net topologies, while preserving X-Nets' desired characteristics. We further present a functional-analytic conjecture based on the longstanding observation that sparse neural network topologies can attain the same expressive power as dense counterparts.
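The abstract describes topologies generated deterministically from radix structure rather than by pruning a dense parent. As an illustrative sketch only (not the paper's exact RadiX-Net construction), the butterfly-style mask below gives every neuron a fixed fan-out of `radix` connections, chosen by varying one mixed-radix digit of its index; the function name `radix_layer_mask` and its parameters are introduced here for illustration.

```python
import numpy as np

def radix_layer_mask(n, radix, layer):
    """Illustrative structured-sparse connectivity mask (hypothetical sketch,
    not the paper's algorithm): neuron i connects to the `radix` neurons whose
    index agrees with i in every base-`radix` digit except digit `layer`."""
    mask = np.zeros((n, n), dtype=bool)
    stride = radix ** layer  # positional weight of the varied digit
    for i in range(n):
        digit = (i // stride) % radix  # i's digit at position `layer`
        for d in range(radix):
            # replace that digit with d to get the target neuron index
            j = i + (d - digit) * stride
            mask[i, j] = True
    return mask

# Each row has exactly `radix` nonzeros, so a layer of width n stores
# radix*n weights instead of n*n, independent of n.
m = radix_layer_mask(8, 2, 0)
```

Stacking such masks with the varied digit position advancing per layer yields the butterfly-like, equal-path-count connectivity that radix-based topologies aim for, at density `radix/n` per layer.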

Metrics

Cited By: 20
FWCI (Field Weighted Citation Impact): 1.28
References: 17
Citation Normalized Percentile: 0.83

Topics

Advanced Neural Network Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Stochastic Gradient Optimization Techniques (Physical Sciences → Computer Science → Artificial Intelligence)