Feng Wang, M. Cenk Gursoy, Senem Velipasalar
In this paper, we consider multi-agent deep reinforcement learning (deep RL) based network slicing agents in a dynamic environment with multiple base stations and multiple users. We develop a deep RL-based jammer with limited prior information and a limited power budget. The goal of the jammer is to minimize the transmission rates achieved with network slicing and thus degrade the network slicing agents' performance. We design a jammer with both listening and jamming phases, and address jamming location optimization as well as jamming channel optimization via deep RL. We evaluate the jammer at the optimized location, generating interference attacks in the optimized set of channels by switching between the jamming and listening phases. We show that the proposed jammer can significantly reduce the victims' performance without direct feedback or prior knowledge of the network slicing policies.
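The listen/jam loop described above can be illustrated with a minimal sketch. This is not the paper's deep RL implementation; it substitutes a simple tabular Q-learning stand-in, and all class, method, and parameter names (`RLJammer`, `listen`, `choose_channel`, `update`) are hypothetical, introduced only to show the alternation between observing channel activity and selecting a channel to jam.

```python
import random

class RLJammer:
    """Illustrative tabular Q-learning jammer (a simplified stand-in for the
    deep RL agent described in the abstract; all names are hypothetical).

    The jammer alternates between a listening phase, in which it observes
    per-channel activity of the network slicing agents, and a jamming phase,
    in which it injects interference on the channel its policy selects.
    """

    def __init__(self, num_channels, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.num_channels = num_channels
        self.epsilon = epsilon           # exploration rate
        self.alpha = alpha               # learning rate
        self.gamma = gamma               # discount factor
        self.q = [0.0] * num_channels    # Q-value per candidate jamming channel

    def listen(self, observed_traffic):
        # Listening phase: observed per-channel activity acts as a simple
        # prior over which channels are worth jamming.
        for ch, activity in enumerate(observed_traffic):
            self.q[ch] += self.alpha * activity

    def choose_channel(self):
        # Jamming phase: epsilon-greedy selection of the channel to jam.
        if random.random() < self.epsilon:
            return random.randrange(self.num_channels)
        return max(range(self.num_channels), key=lambda ch: self.q[ch])

    def update(self, channel, reward):
        # Reward is the rate reduction inflicted on the victims, so the
        # jammer needs no direct feedback on the slicing policies themselves.
        best_next = max(self.q)
        self.q[channel] += self.alpha * (
            reward + self.gamma * best_next - self.q[channel]
        )
```

In this sketch the reward signal is the measured degradation of the victims' transmission rates, consistent with the abstract's point that the jammer operates without direct access to the network slicing policies.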