Data collection in deep reinforcement learning-enhanced reconfigurable intelligent surface-assisted wireless networks
Article
Ertas, I. and Yetgin, H. 2025. Data collection in deep reinforcement learning-enhanced reconfigurable intelligent surface-assisted wireless networks. Engineering Applications of Artificial Intelligence.
Type | Article |
---|---|
Title | Data collection in deep reinforcement learning-enhanced reconfigurable intelligent surface-assisted wireless networks |
Authors | Ertas, I. and Yetgin, H. |
Abstract | In the evolving sixth generation (6G) landscape, the integration of reconfigurable intelligent surfaces (RIS) with unmanned aerial vehicles (UAVs) offers a revolutionary opportunity to optimise data collection in the Internet of things (IoT) through deep reinforcement learning (DRL), improving energy efficiency and network performance. This paper studies how RIS and DRL can increase throughput and energy efficiency in UAV-controlled IoT networks, focusing on enabling UAVs to collect data efficiently across different regions and to land safely. The study is divided into two phases: the first improves the directional capacity and flexibility of the UAV, and the second evaluates the integration of RIS technology. We introduce two DRL models, namely the directional capacity and flexible reconnaissance (DCFR) model and the RIS model, and compare them with a benchmark model, finding significant improvements in communication and data collection efficiency. The simulation results show an 8.18% increase in data collection performance and a 6.92% increase in collected data per unit energy when using RIS, along with a 10.59% increase in collection performance and a 22.64% increase in energy efficiency. Furthermore, a UAV optimised with the double deep Q-network algorithm effectively identified optimal trajectories for data collection, confirming the significant benefits of RIS in UAV-controlled IoT networks. |
Keywords | deep reinforcement learning; data collection; internet of things; reconfigurable intelligent surfaces; unmanned aerial vehicle |
Sustainable Development Goals | 11 Sustainable cities and communities; 13 Climate action |
Middlesex University Theme | Sustainability |
Publisher | Elsevier |
Journal | Engineering Applications of Artificial Intelligence |
ISSN (electronic) | 0952-1976 |
Publication process dates | |
Accepted | 19 Apr 2025 |
Deposited | 24 Apr 2025 |
Output status | Accepted |
Accepted author manuscript | File access level: Open |
https://repository.mdx.ac.uk/item/2378qy
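For readers unfamiliar with the double deep Q-network (DDQN) algorithm the abstract mentions, the sketch below illustrates its core idea: the online network selects the next action while the target network evaluates it, reducing the overestimation bias of plain deep Q-learning. All values and the function name are hypothetical; the paper's actual state/action spaces, network architectures, and hyperparameters are not reproduced here.

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN targets: the online network selects the next action,
    the target network evaluates it (decoupling selection from evaluation).
    Toy sketch only -- not the paper's implementation."""
    best_actions = np.argmax(next_q_online, axis=1)                   # selection
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]  # evaluation
    return rewards + gamma * evaluated * (1.0 - dones)

# Toy batch of two transitions (hypothetical values, not from the paper)
r = np.array([1.0, 0.0])
q_on = np.array([[0.2, 0.8], [0.5, 0.1]])   # online net picks actions 1 and 0
q_tg = np.array([[0.3, 0.6], [0.4, 0.2]])   # target net evaluates those actions
done = np.array([0.0, 1.0])
print(ddqn_targets(r, q_on, q_tg, done, gamma=0.9))  # [1.54, 0.0]
```

The second transition is terminal (`done = 1`), so its target reduces to the immediate reward; this masking is what lets the UAV's landing step terminate an episode cleanly.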