Reward-Reinforced Generative Adversarial Networks for Multi-agent Systems

Changgang Zheng, Chen Zhen, Shufan Yan, Juan Parra, Antonio Garcia-Dominguez, Nelly Bencomo

Research output: Contribution to journal › Review article › peer-review

Abstract

Multi-agent systems deliver highly resilient and adaptable solutions for common problems in telecommunications, aerospace, and industrial robotics. However, achieving an optimal global goal remains a persistent obstacle for collaborative multi-agent systems, where learning affects the behaviour of more than one agent. A number of nonlinear function approximation methods have been proposed for solving the Bellman equation, which describes the recursive form of an optimal policy. However, how to leverage the value distribution in reinforcement learning, and how to improve the efficiency and efficacy of such systems, remain open challenges. In this work, we develop a reward-reinforced generative adversarial network to represent the distribution of the value function, replacing the approximation of Bellman updates. We demonstrate that our method is resilient and outperforms conventional reinforcement learning methods. The method is also applied to a practical case study: maximising the number of user connections to autonomous airborne base stations in a mobile communication network. Our method maximises the data likelihood using a cost function under which agents learn optimal behaviours. This reward-reinforced generative adversarial network can serve as a generic framework for multi-agent learning at the system level.
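
To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of how a conditional GAN can represent a value distribution in place of a scalar Bellman regression: a generator G(s, z) samples returns for a state, and it is trained adversarially so that its samples match Bellman target samples r + γ·G(s', z'). All names, network sizes, hyperparameters, and the toy usage are illustrative assumptions.

```python
# Illustrative sketch only: a GAN whose generator samples from the
# return distribution Z(s), trained against Bellman target samples
# rather than regressing a scalar Bellman update. Hypothetical
# architecture and hyperparameters, not the paper's exact method.
import torch
import torch.nn as nn

STATE_DIM, NOISE_DIM, GAMMA = 4, 8, 0.99  # assumed toy dimensions

class Generator(nn.Module):
    """Maps (state, noise) to one sample of the value distribution Z(s)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, z):
        return self.net(torch.cat([s, z], dim=-1))

class Discriminator(nn.Module):
    """Scores (state, return-sample) pairs: Bellman target vs generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + 1, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, v):
        return self.net(torch.cat([s, v], dim=-1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(s, r, s_next):
    """One adversarial update on a batch of (s, r, s') transitions."""
    batch = s.shape[0]
    z = torch.randn(batch, NOISE_DIM)
    # Bellman target samples: r + gamma * sample from Z(s').
    with torch.no_grad():
        target = r + GAMMA * G(s_next, torch.randn(batch, NOISE_DIM))
    # Discriminator: target samples are "real", generated are "fake".
    d_loss = (bce(D(s, target), torch.ones(batch, 1)) +
              bce(D(s, G(s, z).detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator, replacing Bellman regression.
    g_loss = bce(D(s, G(s, z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Toy usage with random transitions.
s = torch.randn(32, STATE_DIM)
train_step(s, torch.randn(32, 1), s + 0.1 * torch.randn(32, STATE_DIM))
```

Under this reading, the generator's sample distribution plays the role of the learned value distribution, and the adversarial game substitutes for the distributional Bellman update; a multi-agent variant would condition on joint or per-agent states.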
Original language: English
Number of pages: 9
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Publication status: Accepted/In press - 26 Apr 2021
