Distributed Edge Caching via Reinforcement Learning in Fog Radio Access Networks

In this paper, the distributed edge caching problem in fog radio access networks (F-RANs) is investigated. Considering the unknown spatio-temporal content popularity and user preferences, a user request model based on a hidden Markov process is proposed to characterize the fluctuating spatio-temporal traffic demands in F-RANs. Then, a Q-learning method based on the reinforcement learning (RL) framework is put forth to seek the optimal caching policy in a distributed manner, enabling fog access points (F-APs) to learn and track the underlying dynamic process without extra communication cost. Furthermore, a more efficient Q-learning method with value function approximation (Q-VFA-learning) is proposed to reduce complexity and accelerate convergence. Simulation results show that the proposed method outperforms traditional methods.
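To make the Q-VFA idea concrete, the following is a minimal, hypothetical Python sketch: each F-AP runs an independent epsilon-greedy Q-learning agent whose action value is a linear function of features, Q(s, a) ≈ θᵀφ(s, a), updated by a TD(0) rule. The cache-hit reward, the feature map, the Zipf request generator (standing in for the paper's hidden-Markov request model), and all constants are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of per-F-AP Q-learning with linear value function
# approximation (the Q-VFA idea). State, features, reward, and request
# model are illustrative assumptions, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)

N_FILES = 50            # content library size (assumed)
CACHE_SIZE = 5          # per-F-AP cache capacity (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
N_FEATURES = 2 * N_FILES

def features(cache, action):
    """phi(s, a): indicators of cached files plus one-hot of the candidate file."""
    phi = np.zeros(N_FEATURES)
    phi[list(cache)] = 1.0
    phi[N_FILES + action] = 1.0
    return phi

class FogAP:
    """One F-AP learning its caching policy independently (no coordination)."""
    def __init__(self):
        self.theta = np.zeros(N_FEATURES)   # linear Q weights
        self.cache = set(rng.choice(N_FILES, CACHE_SIZE, replace=False).tolist())

    def q(self, cache, action):
        return self.theta @ features(cache, action)

    def act(self):
        """Epsilon-greedy choice of which file to admit into the cache."""
        if rng.random() < EPS:
            return int(rng.integers(N_FILES))
        return int(max(range(N_FILES), key=lambda a: self.q(self.cache, a)))

    def step(self, request):
        action = self.act()
        reward = 1.0 if request in self.cache else 0.0   # cache-hit reward
        phi = features(self.cache, action)
        if action not in self.cache:          # admit; evict an arbitrary file
            self.cache.remove(next(iter(self.cache)))
            self.cache.add(action)
        # TD(0) update of the linear approximator:
        # theta += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)) * phi(s, a)
        best_next = max(self.q(self.cache, a) for a in range(N_FILES))
        td_error = reward + GAMMA * best_next - self.theta @ phi
        self.theta += ALPHA * td_error * phi
        return reward

# Requests drawn from a static Zipf popularity profile, a common stand-in
# for the paper's hidden-Markov request model.
popularity = (1.0 / np.arange(1, N_FILES + 1)) ** 0.8
popularity /= popularity.sum()

ap = FogAP()
hits = sum(ap.step(int(rng.choice(N_FILES, p=popularity))) for _ in range(20000))
print(f"hit rate: {hits / 20000:.3f}")
```

Because the linear approximator shares weights across cache states, the update generalizes across the (otherwise exponential) state space, which is the source of the reduced complexity and faster convergence claimed for Q-VFA-learning over tabular Q-learning.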

Liuyang Lu, Yanxiang Jiang, Mehdi Bennis, Zhiguo Ding, Fu-Chun Zheng, Xiaohu You

A4 Article in conference proceedings

2019 IEEE 89th Vehicular Technology Conference (VTC Spring). 28 April – 1 May 2019, Kuala Lumpur, Malaysia

L. Lu, Y. Jiang, M. Bennis, Z. Ding, F. Zheng and X. You, "Distributed Edge Caching via Reinforcement Learning in Fog Radio Access Networks," 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), Kuala Lumpur, Malaysia, 2019, pp. 1-6, https://doi.org/10.1109/VTCSpring.2019.8746321

https://doi.org/10.1109/VTCSpring.2019.8746321
http://urn.fi/urn:nbn:fi-fe2020050424733