Data collection in deep reinforcement learning-enhanced reconfigurable intelligent surface-assisted wireless networks
Article
Ertas, I. and Yetgin, H. 2025. Data collection in deep reinforcement learning-enhanced reconfigurable intelligent surface-assisted wireless networks. Engineering Applications of Artificial Intelligence. 155. https://doi.org/10.1016/j.engappai.2025.110952
Type | Article |
---|---|
Title | Data collection in deep reinforcement learning-enhanced reconfigurable intelligent surface-assisted wireless networks |
Authors | Ertas, I. and Yetgin, H. |
Abstract | In the evolving sixth generation (6G) landscape, the integration of reconfigurable intelligent surfaces (RIS) with unmanned aerial vehicles (UAVs) offers a revolutionary opportunity to optimise data collection in the Internet of things (IoT) through deep reinforcement learning (DRL) and improve energy efficiency and network performance. This paper aims to study how reconfigurable intelligent surfaces and deep reinforcement learning can help increase throughput and energy efficiency in unmanned aerial vehicle-controlled Internet of things networks. The focus is on improving the capabilities of unmanned aerial vehicles to efficiently collect data in different regions and ensure safe landings. Divided into two phases, the study first improves the directional capacity and flexibility of unmanned aerial vehicles and then evaluates the integration of reconfigurable intelligent surface technology. We introduce two deep reinforcement learning models, namely the directional capacity and flexible reconnaissance (DCFR) model and the reconfigurable intelligent surface model, and compare them with a benchmark model. We found significant improvements in communication and data collection efficiency. The simulation results show an 8.18% increase in data collection performance and a 6.92% increase in collected data per unit energy when using reconfigurable intelligent surfaces, with a 10.59% increase in collection performance and a 22.64% increase in energy efficiency. Furthermore, an unmanned aerial vehicle optimised with the double deep Q-network algorithm effectively identified optimal trajectories for data collection, confirming the significant benefits of reconfigurable intelligent surfaces in unmanned aerial vehicle-controlled Internet of things networks. |
Keywords | deep reinforcement learning; data collection; internet of things; reconfigurable intelligent surfaces; unmanned aerial vehicle; artificial intelligence |
Sustainable Development Goals | 11 Sustainable cities and communities; 13 Climate action |
Middlesex University Theme | Sustainability |
Research Group | Research Group on Development of Intelligent Environments |
Publisher | Elsevier |
Journal | Engineering Applications of Artificial Intelligence |
ISSN (electronic) | 0952-1976 |
Online publication date | 09 May 2025 |
Print publication date | 01 Sep 2025 |
Submitted | 14 May 2024 |
Accepted | 19 Apr 2025 |
Deposited | 24 Apr 2025 |
Output status | Published |
Publisher's version | Access level: Open |
Copyright Statement | Crown Copyright © 2025 Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). |
Digital Object Identifier (DOI) | https://doi.org/10.1016/j.engappai.2025.110952 |
https://repository.mdx.ac.uk/item/2378qy
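
The abstract above credits a double deep Q-network (DDQN) with learning the unmanned aerial vehicle's data-collection trajectory. As a rough illustration only, the sketch below shows the generic DDQN target computation on a toy grid; the state/action sizes, discount factor, and table-based Q stand-ins are assumptions for the example and are not taken from the paper.

```python
import numpy as np

# Toy illustration of the double deep Q-network (DDQN) target rule:
# the online network picks the next action, the target network scores it.
# All sizes and values below are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)

n_states, n_actions = 16, 4   # assumed: 4x4 grid of UAV waypoints, 4 movement actions
gamma = 0.99                  # assumed discount factor

# Table-based stand-ins for the online and target Q-networks.
q_online = rng.normal(size=(n_states, n_actions))
q_target = q_online.copy()

def ddqn_target(reward, next_state, done):
    """Return the DDQN bootstrap target for a single transition."""
    best_action = int(np.argmax(q_online[next_state]))   # selection by the online net
    bootstrap = q_target[next_state, best_action]        # evaluation by the target net
    return reward + gamma * bootstrap * (1.0 - float(done))

# Example transition: the UAV reaches waypoint 5 and collects data worth reward 1.0.
print(f"DDQN target: {ddqn_target(reward=1.0, next_state=5, done=False):.3f}")
```

In a full agent the Q tables would be neural networks and this target would feed a mean-squared-error loss over replayed transitions; decoupling action selection from evaluation is what curbs the overestimation bias of plain deep Q-learning.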