Improving the Performance of Robust Control through Event-Triggered Learning



Alexander von Rohr, Friedrich Solowjow, Sebastian Trimpe

  Overview of the event-triggered learning framework. (Copyright: © Alexander von Rohr.) Initially, the system parameters (A, B) have high uncertainty (left). Learning improves control performance by reducing this uncertainty (right).


Robust controllers ensure stability in feedback loops designed under uncertainty, but at the cost of performance. Recently proposed learning-based methods can reduce model uncertainty in time-invariant systems and thus improve the performance of robust controllers using data. In practice, however, many systems also exhibit uncertainty in the form of changes over time, e.g., due to weight shifts or wear and tear, leading to decreased performance or instability of the learning-based controller. We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the linear quadratic regulator (LQR) problem with rare or slow changes. Our key idea is to switch between robust and learned controllers. For learning, we first approximate the optimal length of the learning phase via Monte-Carlo estimations using a probabilistic model. We then design a statistical test for uncertain systems based on the moment-generating function of the LQR cost. The test detects changes in the system under control and triggers re-learning when control performance deteriorates due to system changes. We demonstrate improved performance over a robust controller baseline in a numerical example.
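The core idea of monitoring the LQR cost and triggering re-learning when it deteriorates can be illustrated with a minimal sketch. This is not the paper's algorithm: instead of the moment-generating-function test, the threshold here is a simple Monte-Carlo quantile of the windowed cost under the nominal model, and all system matrices, noise levels, and the quantile level are illustrative assumptions.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Solve the discrete-time LQR problem by iterating the Riccati recursion."""
    P = Q
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K, P

def window_cost(A, B, K, Q, R, noise_std, T, rng):
    """Average quadratic cost over a window of T steps under u = -K x."""
    x, cost = np.zeros((A.shape[0], 1)), 0.0
    for _ in range(T):
        u = -K @ x
        cost += float(x.T @ Q @ x + u.T @ R @ u)
        x = A @ x + B @ u + noise_std * rng.standard_normal(x.shape)
    return cost / T

rng = np.random.default_rng(0)
# Illustrative double-integrator-like system (assumption, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
K, _ = dlqr(A, B, Q, R)

# Calibrate a trigger threshold from Monte-Carlo rollouts of the nominal
# model (a stand-in for the MGF-based test; the 99% level is an assumption).
samples = [window_cost(A, B, K, Q, R, 0.1, 100, rng) for _ in range(200)]
threshold = np.quantile(samples, 0.99)

# Monitor: a drifted system (e.g., wear and tear) destabilizes the old
# controller, the windowed cost exceeds the threshold, and re-learning fires.
A_drift = np.array([[1.0, 0.1], [0.0, 1.4]])
cost_nominal = window_cost(A, B, K, Q, R, 0.1, 100, rng)
cost_drifted = window_cost(A_drift, B, K, Q, R, 0.1, 100, rng)
trigger = cost_drifted > threshold  # True -> switch to robust controller and re-learn
```

In the event-triggered learning framework, such a trigger would initiate a new learning phase (of Monte-Carlo-optimized length) and a controller update, while the robust controller guarantees stability in the meantime.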

To be presented at the 61st IEEE Conference on Decision and Control.