Deep Reinforcement Learning Based Optimization of Function Splitting in Virtualized Radio Access Networks
Virtualized Radio Access Network (vRAN) is one of the key enablers of future wireless networks, as it brings agility to the radio access network (RAN) architecture and offers new degrees of design freedom. Yet it also raises a challenging problem: how to design the functional split configuration. In this paper, a deep reinforcement learning approach is proposed to optimize function splitting in vRAN. A learning paradigm is developed that optimizes the placement of RAN functions, each of which can reside either at a central/cloud unit (CU) or at a distributed unit (DU). The problem is formulated as constrained neural combinatorial reinforcement learning with the objective of minimizing the total network cost. The solution applies a policy gradient method with Lagrangian relaxation, using a stacked long short-term memory (LSTM) neural network to approximate the policy; a sampling technique with a temperature hyperparameter is then applied for inference. The results show that the proposed solution learns the optimal function split decision and solves the problem with a 0.4% optimality gap. Moreover, the method reduces cost by up to 320% compared to a distributed RAN (D-RAN). We also find that altering the traffic load and routing cost does not significantly degrade the optimality performance.
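To make two of the abstract's ingredients concrete, the sketch below illustrates (i) temperature-scaled sampling of a discrete split decision from policy logits and (ii) a Lagrangian-relaxed objective that folds a constraint violation into the cost. This is a minimal illustration, not the paper's implementation: the logits, the multiplier `lam`, and all numeric values are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): temperature sampling and a
# Lagrangian-relaxed cost, as summarized in the abstract.
import numpy as np

def temperature_sample(logits, temperature, rng):
    """Sample an action index from logits softened by a temperature.

    temperature > 1 flattens the distribution (more diverse samples at
    inference); temperature < 1 sharpens it toward the greedy choice.
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                       # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax probabilities
    return int(rng.choice(len(p), p=p))

def lagrangian_cost(network_cost, constraint_violation, lam):
    """Penalized objective: network cost plus lambda-weighted violation."""
    return network_cost + lam * constraint_violation

rng = np.random.default_rng(0)
logits = [2.0, 0.5, -1.0]              # hypothetical scores for 3 split options
choice = temperature_sample(logits, temperature=1.5, rng=rng)
penalized = lagrangian_cost(network_cost=10.0, constraint_violation=0.2, lam=5.0)
```

With a near-zero temperature the sampler collapses to the arg-max decision, which is how a temperature hyperparameter trades off exploration against greedy inference.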