Communication-Efficient Federated Deep Reinforcement Learning-Based Cooperative Edge Caching in Fog Radio Access Networks

In this paper, the cooperative edge caching problem is studied in fog radio access networks (F-RANs). Given the non-deterministic polynomial-time hard (NP-hard) nature of the problem, a dueling deep Q-network (Dueling DQN) based caching update algorithm is proposed to make optimal caching decisions by learning the dynamic network environment. To protect user data privacy and address the slow convergence of training a single deep reinforcement learning (DRL) model, we propose a communication-efficient federated deep reinforcement learning (CE-FDRL) method that cooperatively trains the models of multiple fog access points (F-APs) in F-RANs. To curb the excessive consumption of communication resources caused by model transmission, we prune and quantize the shared DRL models to reduce the number of transferred model parameters, and we lengthen the communication interval through periodic model aggregation, thereby reducing the number of communication rounds. The global convergence and computational complexity of the proposed method are also analyzed. Simulation results verify that our proposed method outperforms existing benchmark schemes in reducing user request delay and improving the cache hit rate, while the number of transmitted parameters drops to 60% of that of the benchmarks. The proposed method is also shown to achieve faster training and higher communication efficiency.
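To make the compression pipeline concrete, the following is a minimal sketch of the three communication-saving steps named above: magnitude-based pruning, uniform quantization of the shared DRL model parameters, and periodic model aggregation. It assumes simple magnitude pruning, 8-bit uniform quantization, and FedAvg-style averaging; all names and values (prune_ratio, num_bits, aggregation_period, fedavg) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def prune(weights: np.ndarray, prune_ratio: float = 0.4) -> np.ndarray:
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights.
    (Illustrative criterion; the paper's exact pruning rule may differ.)"""
    flat = np.abs(weights).ravel()
    k = int(prune_ratio * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize(weights: np.ndarray, num_bits: int = 8):
    """Uniform quantization: map each weight to one of 2**num_bits levels,
    so only small integers plus (w_min, scale) need to be transmitted."""
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (2 ** num_bits - 1) or 1.0  # guard constant arrays
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize(q: np.ndarray, w_min: float, scale: float) -> np.ndarray:
    """Receiver-side reconstruction of the quantized weights."""
    return w_min + scale * q.astype(np.float32)

def fedavg(models):
    """Periodic aggregation: simple average of the F-APs' model parameters."""
    return np.mean(np.stack(models), axis=0)

# Hypothetical training loop: each F-AP performs local Dueling DQN updates,
# and only every `aggregation_period` steps are pruned + quantized models
# exchanged, which reduces both the communication rounds and the payload.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=(4, 4)) for _ in range(3)]  # 3 F-APs
aggregation_period = 10
for step in range(1, 31):
    # ... local Dueling DQN gradient updates would happen here ...
    if step % aggregation_period == 0:
        compressed = [quantize(prune(w)) for w in local_models]
        decoded = [dequantize(*c) for c in compressed]
        global_model = fedavg(decoded)
        local_models = [global_model.copy() for _ in local_models]
```

In a real deployment the zeroed entries produced by pruning would be sent in a sparse format (indices plus surviving values) rather than as dense arrays; the dense version above is kept only to keep the sketch short.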