Event-triggered Learning

Figure: Abstraction of the event-triggered learning framework. Structured decisions about when to learn are obtained by comparing a model-based reference signal with incoming data from the system. (© Friedrich Solowjow / MPI-IS)

 

Contact

Friedrich Solowjow

The ability to learn is an essential capability of future intelligent systems facing uncertain environments. However, learning a new model or behavior often does not come for free, but incurs a cost. For example, gathering informative data can be challenging due to physical limitations, and updating models can require substantial computation. Moreover, learning for autonomous agents often requires exploring new behavior and thus typically means deviating from nominal or desired operation. Hence, the question of when to learn is essential for the efficient and intelligent operation of autonomous systems.

Event-triggered learning (ETL) was first proposed in our Automatica 2020 publication for making principled decisions about when to learn new dynamics models, and was applied to efficient communication in distributed systems. Information exchange in distributed systems is key to solving collaborative tasks. Communication often takes place over wireless networks and therefore needs to be used sparingly to avoid overloading the network. Dynamical models are deployed to predict other agents' behavior, so accurate models are essential for reducing communication effectively. We developed a stochastic trigger that decides when to learn a new model and derived statistical guarantees that the triggering happens at the right time. In a collaboration with TU Berlin, we validated the proposed method experimentally on IMU sensor networks, as described in our L-CSS 2020 publication.
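To make the idea concrete, the following minimal Python sketch shows one way such a learning trigger could be structured: incoming measurements are compared with model predictions, and a model update is triggered once the empirical prediction error exceeds the error level the current model is expected to produce. The function names, window length, and threshold rule are hypothetical illustrations under simplifying assumptions, not the exact trigger or statistical guarantees from the Automatica 2020 paper.

```python
import numpy as np

def learning_trigger(residuals, expected_error, threshold_factor=2.0):
    """Illustrative learning trigger (not the published test statistic).

    residuals        : recent prediction errors (measurement - model prediction)
    expected_error   : error level the current model is expected to produce,
                       e.g. estimated from its process-noise statistics
    threshold_factor : how far the empirical error may exceed the expected
                       level before a model update is triggered
    """
    empirical_error = np.mean(np.abs(residuals))
    return empirical_error > threshold_factor * expected_error


# Hypothetical usage: predictions come from the current dynamics model,
# measurements from the (possibly remote) system; learning is triggered
# only when the model's predictive performance degrades.
WINDOW = 50            # length of the moving comparison window
residual_buffer = []

def on_new_measurement(x_measured, x_predicted, expected_error):
    residual_buffer.append(x_measured - x_predicted)
    if len(residual_buffer) > WINDOW:
        residual_buffer.pop(0)
    if len(residual_buffer) == WINDOW and learning_trigger(
            residual_buffer, expected_error):
        residual_buffer.clear()
        return "learn"        # collect fresh data and update the model
    return "keep_model"       # keep predicting with the current model
```

In this sketch the comparison of the model-based reference signal (the expected error level) with incoming data plays the role of the trigger; in the actual framework this comparison is a statistical test with guarantees on triggering at the right time.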

While the approach effectively reduces communication in distributed systems, the underlying ideas are more general and address, at their core, the question of when to learn, with possible extensions in different directions. Among others, our TAC-CSS 2020 paper investigates the generalization to cost signals, and our arXiv 2020 preprint the extension to nonlinear systems. Using different performance signals to make structured learning decisions is an important aspect of ongoing research, which also connects to reinforcement learning, where the exploration-exploitation trade-off is a closely related manifestation of the same problem.