JOURNAL ARTICLE

Deep Reinforcement Learning for Fresh Data Collection in UAV-assisted IoT Networks

Abstract

Due to their flexibility and low operational cost, dispatching unmanned aerial vehicles (UAVs) to collect information from distributed sensors is expected to be a promising solution in the Internet of Things (IoT), especially for time-critical applications. Maintaining information freshness, however, is a challenging issue. In this paper, we investigate the fresh data collection problem in UAV-assisted IoT networks. In particular, the UAV flies towards the sensors to collect status update packets within a given duration while maintaining non-negative residual energy. We formulate a Markov decision process (MDP) to find the optimal flight trajectory of the UAV and transmission scheduling of the sensors that minimize the weighted sum of the age of information (AoI). A UAV-assisted data collection algorithm based on deep reinforcement learning (DRL) is further proposed to overcome the curse of dimensionality. Extensive simulation results demonstrate that the proposed DRL-based algorithm significantly reduces the weighted sum of the AoI compared to baseline algorithms.
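A minimal sketch of the per-slot age-of-information dynamics the abstract refers to: each sensor's AoI grows by one every time slot and resets when the UAV successfully collects that sensor's status update, and the objective aggregates the ages with per-sensor weights. The reset value, weights, and collection schedule below are illustrative assumptions, not details taken from the paper.

```python
def step_aoi(aoi, collected):
    """Advance one time slot: every sensor's AoI grows by 1,
    except the sensor just collected, whose AoI resets to 1
    (its freshest packet is now one slot old)."""
    return [1 if i == collected else a + 1 for i, a in enumerate(aoi)]

def weighted_sum_aoi(aoi, weights):
    """The objective the paper minimizes: a weighted sum of sensor ages."""
    return sum(w * a for w, a in zip(weights, aoi))

# Example: 3 sensors with assumed weights; the UAV collects from
# sensor 0 in the first slot and sensor 2 in the second.
aoi = [1, 1, 1]
weights = [0.5, 0.3, 0.2]
for collected in (0, 2):
    aoi = step_aoi(aoi, collected)
print(aoi)                         # [2, 3, 1]
print(weighted_sum_aoi(aoi, weights))  # 2.1
```

In the paper's MDP, this weighted sum (accumulated over the flight duration) serves as the cost the DRL agent learns to minimize while choosing the UAV trajectory and the transmission schedule.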

Keywords:
Computer science; Markov decision process; Reinforcement learning; Curse of dimensionality; Network packet; Internet of Things; Real-time computing; Flexibility (engineering); Scheduling (production processes); Data collection; Wireless sensor network; Drone; Artificial intelligence; Markov process; Machine learning; Computer network; Mathematical optimization; Embedded system

Metrics

Cited by: 112
FWCI (Field-Weighted Citation Impact): 13.39
References: 17
Citation Normalized Percentile: 0.99 (in top 1%)

Topics

Age of Information Optimization
Physical Sciences →  Computer Science →  Computer Networks and Communications
IoT Networks and Protocols
Physical Sciences →  Engineering →  Electrical and Electronic Engineering
UAV Applications and Optimization
Physical Sciences →  Engineering →  Aerospace Engineering