Feng Wang, M. Cenk Gursoy, Senem Velipasalar, Yalin E. Sagduyu
In this paper, we first present a deep reinforcement learning (deep RL) framework for network slicing in a dynamic environment. We propose three deep RL algorithms, namely actor-critic, deep Q-network (DQN), and soft DQN, to select slices from the best recorded subset, which is updated over time to adapt to the dynamic environment. We evaluate the performance of the proposed deep RL agents for network slicing and compare them. Subsequently, we design intelligent jammers, also as deep RL agents, that significantly degrade the user's sum reward. Finally, we propose effective defensive measures that mitigate jamming attacks by determining the proper time instants at which to retrain the network slicing policy. Via simulations, we quantify the performance improvements achieved with defensive retraining.
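As a rough illustration of the slice-selection idea in the abstract, the sketch below uses a simplified tabular Q-learning agent (a stand-in for the paper's neural-network-based actor-critic, DQN, and soft DQN agents) that tracks recorded slice values and exploits the best recorded slice while the environment drifts. All names, reward models, and parameters here are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

# Simplified, hypothetical stand-in for deep-RL-based slice selection:
# a tabular epsilon-greedy agent choosing among network slices whose
# (unknown) qualities drift over time, mimicking a dynamic environment.

rng = np.random.default_rng(0)

N_SLICES = 4                 # candidate network slices (assumed)
ALPHA = 0.1                  # learning rate for the value update
EPSILON = 0.1                # exploration rate

q = np.zeros(N_SLICES)                           # recorded value of each slice
true_reward = rng.uniform(0.2, 1.0, N_SLICES)    # hidden slice quality

def step():
    """One interaction: pick a slice, observe a noisy reward, update q."""
    if rng.random() < EPSILON:
        a = int(rng.integers(N_SLICES))   # explore a random slice
    else:
        a = int(np.argmax(q))             # exploit the best recorded slice
    r = float(true_reward[a] + 0.05 * rng.normal())
    q[a] += ALPHA * (r - q[a])            # incremental value update
    return a, r

total = 0.0
T = 2000
for _ in range(T):
    # dynamic environment: slice qualities drift slowly between steps
    true_reward = np.clip(true_reward + 0.001 * rng.normal(size=N_SLICES), 0.0, 1.0)
    _, r = step()
    total += r

print(f"average reward: {total / T:.3f}")
print("best recorded slice:", int(np.argmax(q)))
```

In the paper's setting the tabular values would be replaced by neural-network function approximators, and a jamming adversary would perturb the observed rewards, triggering the defensive retraining the abstract describes.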