Abstract
Multi-agent systems deliver highly resilient and adaptable solutions for common problems in telecommunications, aerospace, and industrial robotics. However, achieving an optimal global goal remains a persistent obstacle for collaborative multi-agent systems, in which learning affects the behaviour of more than one agent. A number of nonlinear function approximation methods have been proposed for solving the Bellman equation, which describes a recursive form of the optimal policy. However, how to leverage the value distribution in reinforcement learning, and how to improve the efficiency and efficacy of such systems, remain open challenges. In this work, we develop a reward-reinforced generative adversarial network to represent the distribution of the value function, replacing the approximation of Bellman updates. We demonstrate that our method is resilient and outperforms conventional reinforcement learning methods. The method is also applied to a practical case study: maximising the number of user connections to autonomous airborne base stations in a mobile communication network. Our method maximises the data likelihood using a cost function under which agents learn optimal behaviours. This reward-reinforced generative adversarial network can be used as a generic framework for multi-agent learning at the system level.
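To make the core idea concrete, below is a minimal, single-agent sketch of using a GAN to model the distribution of returns instead of regressing onto a scalar Bellman backup. This is not the paper's exact algorithm: the network sizes, the toy transition data, the hyperparameters, and the use of bootstrapped `r + γ·G(s', z')` samples as the "real" data for the discriminator are all illustrative assumptions made only for this example.

```python
# Illustrative sketch (PyTorch): a GAN whose generator G(s, z) samples returns
# for a state, trained against bootstrapped Bellman-style target samples
# rather than a single scalar value target. All sizes and data are assumptions.
import torch
import torch.nn as nn

STATE_DIM, NOISE_DIM, GAMMA = 4, 8, 0.99

class ValueGenerator(nn.Module):
    """Maps (state, noise) to one sampled return; repeated noise draws
    approximate the return distribution for that state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, z):
        return self.net(torch.cat([s, z], dim=-1))

class ValueDiscriminator(nn.Module):
    """Scores (state, return-sample) pairs as 'target-like' vs 'generated'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + 1, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, v):
        return self.net(torch.cat([s, v], dim=-1))

gen, disc = ValueGenerator(), ValueDiscriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(s, r, s_next):
    """One adversarial update on a batch of transitions (s, r, s_next)."""
    batch = s.shape[0]
    z, z_next = torch.randn(batch, NOISE_DIM), torch.randn(batch, NOISE_DIM)

    # Distributional Bellman-style target sample: observed reward plus a
    # discounted return drawn from the (frozen) generator at the next state.
    with torch.no_grad():
        target_v = r.unsqueeze(-1) + GAMMA * gen(s_next, z_next)

    # Discriminator: 'real' = bootstrapped target samples, 'fake' = generator output.
    fake_v = gen(s, z)
    d_loss = (bce(disc(s, target_v), torch.ones(batch, 1)) +
              bce(disc(s, fake_v.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: make its return samples indistinguishable from the targets.
    g_loss = bce(disc(s, gen(s, z)), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random transitions, just to show the shape of the training loop.
for _ in range(3):
    s, r, s_next = torch.randn(32, STATE_DIM), torch.randn(32), torch.randn(32, STATE_DIM)
    print(train_step(s, r, s_next))
```

In a multi-agent setting along the lines the abstract describes, each agent (e.g. an airborne base station) would carry such a generator for its own value distribution, with the reward signal derived from the system-level objective such as the number of connected users; those details are specific to the paper and not reproduced here.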
Original language | English
---|---
Pages (from-to) | 479-488
Number of pages | 10
Journal | IEEE Transactions on Emerging Topics in Computational Intelligence
Volume | 6
Issue number | 3
Early online date | 8 Jun 2021
Publication status | Published - Jun 2022
Keywords
- Base stations
- GAN
- Generative adversarial networks
- Generators
- Mathematical model
- Multi-agent systems
- Reinforcement learning
- Training
- airborne base station (ABS)
- multi-agent
- reinforcement learning
- reward-reinforced GAN