Safe Value Functions



Pierre-François Massiani, Steve Heim, Friedrich Solowjow, Sebastian Trimpe

Figure: Value function of an optimal control problem for different values of the penalty. When the penalty passes a threshold (green curve), failure is no longer attractive, and the optimal controller remains safe. (Copyright: © Pierre-François Massiani)


Safety constraints and optimality are important but sometimes conflicting criteria for controllers. Although these criteria are often addressed separately with different tools to maintain formal guarantees, it is also common practice in reinforcement learning to simply modify the reward function by penalizing failures, with the penalty treated as a mere heuristic. We rigorously examine the relationship of both safety and optimality to penalties, and formalize sufficient conditions for safe value functions (SVFs): value functions that are both optimal for a given task and enforce safety constraints. We reveal this structure by examining when rewards preserve viability under optimal control, and show that there always exists a finite penalty that induces a safe value function. This penalty is not unique, but is unbounded above: larger penalties do not harm optimality. Although it is often not possible to compute the minimum required penalty, we reveal a clear structure in how the penalty, rewards, discount factor, and dynamics interact. This insight suggests practical, theory-guided heuristics for designing reward functions for control problems where safety is important.
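The threshold behavior can be illustrated on a toy example. The sketch below is a hypothetical MDP (all state names, rewards, and probabilities are illustrative, not from the paper): a "risky" shortcut reaches the goal quickly but fails with probability 0.1, while a "safe" detour takes longer. Solving the penalized problem by value iteration shows that below a finite penalty threshold the optimal policy takes the unsafe shortcut, while above it the optimal policy remains safe, and increasing the penalty further changes nothing.

```python
GAMMA = 0.9  # discount factor

def make_mdp(penalty):
    # Hypothetical MDP: state -> action -> list of (probability, next_state, reward).
    # Entering the failure state incurs reward -penalty.
    return {
        "start": {
            "risky": [(0.9, "goal", 10.0), (0.1, "fail", -penalty)],
            "safe":  [(1.0, "mid", 0.0)],
        },
        "mid":      {"go": [(1.0, "pre_goal", 0.0)]},
        "pre_goal": {"go": [(1.0, "goal", 10.0)]},
        "goal": {},  # absorbing, value 0
        "fail": {},  # absorbing, value 0
    }

def value_iteration(mdp, tol=1e-8):
    # Standard value iteration; absorbing states keep value 0.
    V = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            if not actions:
                continue
            v = max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
                    for outcomes in actions.values())
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def greedy_action(penalty):
    # Optimal action at the start state under the given failure penalty.
    mdp = make_mdp(penalty)
    V = value_iteration(mdp)
    q = {a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
         for a, outcomes in mdp["start"].items()}
    return max(q, key=q.get)

# In this toy problem the threshold is p* = 9: the risky action is worth
# 9 - 0.1*p, the safe detour 8.1, so the optimal policy flips at p = 9.
print(greedy_action(5.0))   # risky
print(greedy_action(50.0))  # safe
```

Consistent with the result above, any penalty larger than the threshold yields the same safe optimal policy, so the penalty need not be tuned precisely, only made large enough.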

Accepted for publication in the IEEE Transactions on Automatic Control (TAC), special issue on Learning and Control (2023).


IEEE Xplore