Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control



Lukas Kesper, Sebastian Trimpe, Dominik Baumann

  Overview of the proposed method (Copyright: © Lukas Kesper). Left: agents connected via a network, shown here for two agents, interacting in an environment. Right: the simulation environment.


Event-triggered communication and control provide high control performance in networked control systems without overloading the communication network. However, most approaches require precise mathematical models of the system dynamics, which may not always be available. Model-free learning of communication and control policies provides an alternative. Nevertheless, existing methods typically consider single-agent settings. This paper proposes a model-free reinforcement learning algorithm that jointly learns resource-aware communication and control policies for distributed multi-agent systems from data. We evaluate the algorithm in a high-dimensional and nonlinear simulation example and discuss promising avenues for further research.
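To illustrate the core idea of resource-aware communication, the following toy sketch shows an event-triggered agent that transmits its state only when it deviates sufficiently from the last broadcast value, and whose reward trades off control cost against a communication penalty. This is a hand-written illustration, not the paper's algorithm: the threshold, feedback gain, dynamics, and cost weights are all assumptions, whereas in the proposed method both the trigger and the controller are learned from data with reinforcement learning.

```python
class EventTriggeredAgent:
    """Toy agent: transmits only when its state drifts too far
    from the last broadcast value (event-triggered communication)."""

    def __init__(self, threshold=0.5, comm_penalty=0.1):
        self.threshold = threshold        # trigger threshold (learned in the paper's setting)
        self.comm_penalty = comm_penalty  # resource cost per transmission
        self.last_sent = 0.0              # most recent value put on the network

    def step(self, state):
        # Communication policy: trigger on the deviation from the last broadcast.
        communicate = abs(state - self.last_sent) > self.threshold
        if communicate:
            self.last_sent = state
        # Control policy: simple proportional feedback acting only on the
        # information actually available over the network.
        action = -0.8 * self.last_sent
        return action, communicate


def rollout(agent, x0=1.0, steps=20):
    """Simulate a scalar system x_{k+1} = x_k + u_k and accumulate a
    resource-aware return: quadratic state cost plus communication cost."""
    x, ret, comms = x0, 0.0, 0
    for _ in range(steps):
        u, sent = agent.step(x)
        x = x + u
        comms += int(sent)
        ret -= x * x + agent.comm_penalty * int(sent)
    return ret, comms
```

Running `rollout(EventTriggeredAgent())` stabilizes the toy system while transmitting in only a fraction of the time steps, which is the qualitative behavior the learned policies aim for: high control performance without continuously loading the network.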

Presented at the 5th Annual Learning for Dynamics and Control Conference (L4DC).




Project website