Hyperspectral images, which record the electromagnetic spectrum at each pixel of a scene, often store hundreds of channels per pixel and contain an order of magnitude more information than a similarly sized RGB color image. Consequently, as the cost of capturing these images falls, there is a growing need for efficient techniques for storing, transmitting, and analyzing them. This paper develops a method for hyperspectral image compression using implicit neural representations, in which a multi-layer perceptron f_Θ with sinusoidal activation functions "learns" to map pixel locations to pixel intensities for a given hyperspectral image I. f_Θ thus acts as a compressed encoding of the image, and the original image is reconstructed by evaluating f_Θ at each pixel location. We evaluate our method on four benchmarks (Indian Pines, Jasper Ridge, Pavia University, and Cuprite) and show that it achieves better compression than JPEG, JPEG2000, and PCA-DCT at low bitrates.
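The idea above can be sketched as a tiny coordinate network. The following is a minimal, hedged illustration (not the authors' implementation): a SIREN-style MLP with sinusoidal activations that maps 2-D pixel coordinates to a vector of spectral intensities, then reconstructs an image by evaluating the network at every pixel location. The layer widths, ω₀ value, and channel count are illustrative assumptions.

```python
import numpy as np

def init_layer(rng, fan_in, fan_out, w0=30.0, first=False):
    # SIREN-style initialization (an assumption, following common practice):
    # first layer uniform(-1/fan_in, 1/fan_in); later layers
    # uniform(-sqrt(6/fan_in)/w0, sqrt(6/fan_in)/w0).
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / w0
    W = rng.uniform(-bound, bound, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b

def siren_forward(params, coords, w0=30.0):
    # coords: (N, 2) pixel locations scaled to [-1, 1]
    h = coords
    for W, b in params[:-1]:
        h = np.sin(w0 * (h @ W + b))  # sinusoidal activation
    W, b = params[-1]
    return h @ W + b                  # linear output: (N, C) intensities

rng = np.random.default_rng(0)
channels = 200                        # hypothetical spectral channel count
dims = [2, 64, 64, channels]          # illustrative layer widths
params = [init_layer(rng, dims[i], dims[i + 1], first=(i == 0))
          for i in range(len(dims) - 1)]

# Reconstruct an image by evaluating f_Theta at each pixel location.
height = width = 32
ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                     np.linspace(-1, 1, width), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
recon = siren_forward(params, coords).reshape(height, width, channels)
print(recon.shape)  # (32, 32, 200)
```

Under this scheme, the "compressed file" is the set of weights in `params`: the network is overfit to one image, and its parameter count (rather than the raw pixel grid) determines the bitrate.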
Shima Rezasoltani, Faisal Z. Qureshi