JOURNAL ARTICLE

Regularize implicit neural representation by itself

Abstract

This paper proposes a regularizer called Implicit Neural Representation Regularizer (INRR) to improve the generalization ability of the Implicit Neural Representation (INR). The INR is a fully connected network that can represent signals with details not restricted by grid resolution. However, its generalization ability could be improved, especially with non-uniformly sampled data. The proposed INRR is based on a learned Dirichlet Energy (DE) that measures similarities between rows/columns of the matrix. The smoothness of the Laplacian matrix is further integrated by parameterizing DE with a tiny INR. INRR improves the generalization of INR in signal representation by integrating the signal's self-similarity with the smoothness of the Laplacian matrix. Through well-designed numerical experiments, the paper also reveals a series of properties derived from INRR, including a momentum-method-like convergence trajectory and multi-scale similarity. Moreover, the proposed method could improve the performance of other signal representation methods.
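The core quantity the abstract describes, a Dirichlet energy tr(XᵀLX) built from a graph Laplacian that encodes similarity between rows of the represented matrix, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the adjacency here is a fixed hand-written similarity matrix, whereas the paper learns it by parameterizing DE with a tiny INR.

```python
import numpy as np

def laplacian_from_adjacency(A):
    """Graph Laplacian L = D - A for a symmetric, non-negative adjacency A."""
    D = np.diag(A.sum(axis=1))
    return D - A

def dirichlet_energy(X, L):
    """Dirichlet energy tr(X^T L X).

    Equals sum_{i<j} A_ij * ||x_i - x_j||^2, so it is small when rows
    that the adjacency links strongly are similar to each other.
    """
    return np.trace(X.T @ L @ X)

# Toy adjacency: rows (0,1) declared similar, and rows (2,3) declared similar.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
L = laplacian_from_adjacency(A)

# Rows agreeing with the adjacency vs. rows contradicting it.
X_smooth = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
X_rough  = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])

print(dirichlet_energy(X_smooth, L))  # 0.0  (linked rows identical)
print(dirichlet_energy(X_rough, L))   # 4.0  (linked rows dissimilar, penalized)
```

Used as a penalty added to the INR's fitting loss, this term pulls rows that the (learned) Laplacian links toward each other, which is how the regularizer injects the signal's self-similarity into training.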

Keywords:
Generalization; Implicit neural representation; Similarity; Convergence; Artificial neural network; Smoothness; Laplacian matrix; Artificial intelligence; Pattern recognition; Image representation

Metrics

Cited by: 8
FWCI (Field-Weighted Citation Impact): 1.46
References: 52
Citation Normalized Percentile: 0.79

Topics

Image and Signal Denoising Methods (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Sparse and Compressive Sensing Techniques (Physical Sciences → Engineering → Computational Mechanics)