Cooperative Perception in Vehicular Networks Using Multi-Agent Reinforcement Learning
Cooperative perception plays a vital role in extending a vehicle's sensing range beyond its line of sight. However, exchanging raw sensory data under limited communication resources is infeasible. To enable efficient cooperative perception, vehicles must answer fundamental questions: what sensory data should be shared, and at which resolution? To this end, this paper proposes reinforcement learning (RL)-based content selection for cooperative perception messages, built on a quadtree-based point cloud compression mechanism. Furthermore, we investigate the role of federated RL in enhancing the training process. Simulation results show that the RL agents efficiently learn a message content selection policy that maximizes the satisfaction of vehicles with the received sensory information. Federated RL is also shown to improve training: better policies are reached within the same amount of time than with the non-federated approach.
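As a rough illustration of the quadtree compression idea the abstract refers to, the sketch below recursively subdivides a 2-D region and keeps only occupied cells, with the recursion depth acting as the resolution knob an RL agent could select. This is a minimal assumed sketch, not the paper's implementation; the function `quadtree_encode` and its cell representation are hypothetical.

```python
def quadtree_encode(points, bounds, max_depth):
    """Encode a 2-D point cloud as the occupied leaf cells of a quadtree.

    points:    iterable of (x, y) tuples
    bounds:    (x0, y0, x1, y1) axis-aligned region to encode
    max_depth: subdivision depth; larger means finer resolution and
               (typically) more cells to transmit
    Returns a list of occupied leaf cells as (x0, y0, x1, y1) tuples.
    """
    x0, y0, x1, y1 = bounds
    # Keep only the points that fall inside this cell.
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    if not inside:
        return []          # empty cell: nothing to transmit
    if max_depth == 0:
        return [bounds]    # occupied leaf at the chosen resolution
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    cells = []
    # Recurse into the four quadrants with one less level of depth.
    for quad in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)):
        cells += quadtree_encode(inside, quad, max_depth - 1)
    return cells
```

A clustered point cloud compresses well: empty quadrants are pruned immediately, so the number of transmitted cells grows with occupied area and chosen depth rather than with the raw point count, which is what makes per-region resolution selection a natural action space for the RL agent.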