Mikolaj Czerkawski, Javier Cardona, Robert Atkinson, Craig Michie, Ivan Andonović, Carmine Clemente, Christos Tachtatzis
Coordinate-based Multilayer Perceptron (MLP) networks, despite being capable of learning neural implicit representations, are not performant for internal image synthesis applications. Convolutional Neural Networks (CNNs) are typically used instead for a variety of internal generative tasks, at the cost of a larger model. We propose Neural Knitwork, an architecture for neural implicit representation learning of natural images that achieves image synthesis by optimizing the distribution of image patches in an adversarial manner and by enforcing consistency between the patch predictions. To the best of our knowledge, this is the first implementation of a coordinate-based MLP tailored for synthesis tasks such as image inpainting, super-resolution, and denoising. We demonstrate the utility of the proposed technique by training on these three tasks. The results show that modeling natural images using patches, rather than pixels, produces results of higher fidelity. The resulting model requires 80% fewer parameters than alternative CNN-based solutions while achieving comparable performance and training time.
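The core idea of a coordinate-based MLP can be illustrated with a minimal sketch: the network maps a normalized pixel coordinate (x, y) directly to an RGB value, so the network weights themselves become an implicit representation of the image. The code below is an illustrative toy with random weights and hypothetical layer sizes, not the Neural Knitwork architecture or its patch-based training.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a fully connected network with the given layer sizes."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, coords):
    """coords: (N, 2) array of (x, y) in [0, 1]; returns (N, 3) RGB in (0, 1)."""
    h = coords
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)           # ReLU hidden layers
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))    # sigmoid keeps RGB in (0, 1)

# Query the implicit image on a 4x4 grid of normalized coordinates.
params = init_mlp([2, 64, 64, 3])                # hypothetical layer sizes
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)   # shape (16, 2)
rgb = forward(params, coords)                         # shape (16, 3)
```

In practice such a network is fitted to one image by regressing its outputs against the ground-truth pixel values at each coordinate; the abstract's contribution is to supervise patches rather than individual pixels.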
Arya Aftab, Alireza Morsali, Shahrokh Ghaemmaghami
Kraus, Michael A.; Tatsis, Konstantinos E.
Hao Zhu, Shaowen Xie, Zhen Liu, Fengyi Liu, Qi Zhang, You Zhou, Yi Lin, Zhan Ma, Xun Cao
Vishwanath Saragadam, Jasper Tan, Guha Balakrishnan, Richard G. Baraniuk, Ashok Veeraraghavan