Lili Zhang, Tianpeng Pan, Jiahui Liu, Lin Han
Hyperspectral images play an important role in remote sensing, and, as with ordinary RGB images, reconstructing the original information at higher quality with fewer bits is an essential task. Most existing hyperspectral image compression methods adopt a transform-based framework that converts the input into a latent representation of a specific size and then reconstructs the original image from it. These approaches have achieved some success, but they suffer from two problems. First, the encoder and decoder used for the transformation consume a huge amount of computational resources, both in training and in deployment. Second, their upper performance limit is unsatisfactory; that is, the large computational cost does not bring a matching performance gain. Motivated by this, we propose a novel hyperspectral image compression method. Specifically, we employ neural radiance fields (NeRF) to compress hyperspectral images: unlike transform-based methods, the proposed method encodes hyperspectral coordinate information and fits it to hyperspectral pixel values using multilayer perceptrons (MLPs). Since the fitted MLP, acting as the fitting function, can be regarded as a compressed representation of the hyperspectral image, we then only need to compress the weights of the generated MLPs with a model compression method (in this paper, weight quantization alone) to store the hyperspectral image efficiently. Under the same conditions, the proposed method achieves nearly 5 dB higher PSNR and 6 dB higher MS-SSIM than the compared deep-learning-based methods.
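The pipeline the abstract describes — fit an MLP to pixel coordinates, then store only the (quantized) MLP weights — reduces compression to weight quantization. The sketch below (illustrative names, not the authors' code) shows the final step: uniform 8-bit quantization of a flat weight vector, whose reconstruction error is bounded by half a quantization step.

```python
import random

def quantize(weights, bits=8):
    """Uniformly quantize a flat list of float weights to `bits`-bit integers.

    Returns the integer codes plus the (lo, scale) pair needed to dequantize.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # guard against a constant tensor
    codes = [round((w - lo) / scale) for w in weights]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Map integer codes back to approximate float weights."""
    return [lo + c * scale for c in codes]

random.seed(0)
w = [random.uniform(-1.0, 1.0) for _ in range(1000)]  # stand-in for fitted MLP weights
codes, lo, scale = quantize(w)
w_hat = dequantize(codes, lo, scale)

# Uniform quantization guarantees per-weight error of at most scale / 2.
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
assert max_err <= scale / 2 + 1e-12
print(f"stored {len(codes)} bytes (8-bit) vs {len(w) * 4} bytes (float32), "
      f"max error {max_err:.5f}")
```

In practice the weights would come from an MLP trained to map encoded (row, column, band) coordinates to pixel values; per-layer quantization with separate (lo, scale) pairs typically loses less accuracy than quantizing all weights jointly as done here.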