Xi Yu, Tiejun Lv, Weicai Li, Wei Ni, Dusit Niyato, Ekram Hossain
Multi-task semantic communication can serve multiple learning tasks using a shared encoder model. Existing models overlook the intricate relationships between the features extracted during encoding for different tasks. This paper adds a new graph attention inter-block (GAI) module to the encoder/transmitter of a multi-task semantic communication system, which enriches the features for multiple tasks by embedding the intermediate outputs of encoding in the transmitted features, in contrast to existing techniques. The key idea is to interpret the outputs of the intermediate feature extraction blocks of the encoder as the nodes of a graph, capturing the correlations among the intermediate features. Another important aspect is that the node representations are refined using a graph attention mechanism to extract the correlations, and a multi-layer perceptron network to associate the node representations with different tasks. Consequently, the intermediate features are weighted and embedded into the features transmitted for executing multiple tasks at the receiver. Experiments demonstrate that the proposed model surpasses the most competitive publicly available models by 11.4% on the CityScapes 2Task dataset and outperforms the established state of the art by 3.97% on the NYU v2 3Task dataset, when the bandwidth ratio of the communication channel (i.e., the compression level for transmission over the channel) is as constrained as 1/12.
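To make the mechanism concrete, the sketch below illustrates the general idea described in the abstract: intermediate encoder-block outputs are treated as graph nodes, refined by a single-head graph attention step over a fully connected graph, and then mapped by a small MLP to per-task weights that fold the intermediate features into task-specific features. All dimensions, the random weights, and the single-head/fully-connected choices are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 4 encoder blocks, each intermediate output pooled
# to a d-dimensional node vector; 2 downstream tasks.
num_nodes, d, num_tasks = 4, 8, 2
H = rng.normal(size=(num_nodes, d))   # node features, one per block

# Single-head graph attention over a fully connected graph of
# intermediate features (GAT-style scoring; weights are random here).
W = rng.normal(size=(d, d))           # shared linear transform
a = rng.normal(size=(2 * d,))         # attention vector

Z = H @ W                             # transformed node features
logits = np.empty((num_nodes, num_nodes))
for i in range(num_nodes):
    for j in range(num_nodes):
        # e_ij = LeakyReLU(a^T [z_i || z_j])
        e = a @ np.concatenate([Z[i], Z[j]])
        logits[i, j] = e if e > 0 else 0.2 * e
alpha = softmax(logits, axis=1)       # attention coefficients per node
H_ref = alpha @ Z                     # refined node representations

# A small MLP maps each refined node to per-task weights; the weights
# then combine the intermediate features into task-specific features.
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, num_tasks))
task_w = softmax(np.maximum(H_ref @ W1, 0) @ W2, axis=0)  # (nodes, tasks)

# Task-specific features: weighted sums of refined intermediate features.
task_feats = task_w.T @ H_ref         # (tasks, d)
print(task_feats.shape)               # (2, 8)
```

In a trained system, `W`, `a`, `W1`, and `W2` would be learned end-to-end with the encoder, and the task-specific features would be embedded into the features transmitted over the channel.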