This work provides a rigorous framework for studying continuous-time control problems in uncertain environments. The framework models uncertainty in the state dynamics as a probability measure on a space of functions, and this measure is permitted to change over time as agents learn about their environment. The model can be seen as a variant of either Bayesian reinforcement learning (RL) or adaptive optimal control. We study conditions for locally optimal trajectories within this model, in particular deriving an appropriate dynamic programming principle and Hamilton–Jacobi equations. Some discussion of variants of the model is also provided, including one potential framework for studying the tradeoff between exploration and exploitation in RL.
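For orientation, the classical objects the abstract refers to can be sketched as follows. This is the textbook finite-horizon deterministic formulation, not the paper's exact setting; the symbols $f$, $\ell$, $g$, and $A$ are generic placeholders, and in the paper the known dynamics $f$ would be replaced by a time-varying probability measure over candidate dynamics that is updated as the agent learns.

```latex
% Generic finite-horizon control problem (illustrative placeholders, not
% the paper's notation): value function
%   V(t,x) = \inf_{a(\cdot)} \int_t^T \ell(x(s), a(s))\, ds + g(x(T)),
% subject to \dot{x}(s) = f(x(s), a(s)), x(t) = x.
% The dynamic programming principle then yields the standard
% Hamilton--Jacobi--Bellman equation:
\partial_t V(t,x)
  + \inf_{a \in A} \bigl\{ \ell(x,a)
  + \nabla_x V(t,x) \cdot f(x,a) \bigr\} = 0,
\qquad V(T,x) = g(x).
```

In the uncertain setting described above, one would expect the Hamiltonian (the infimum term) to involve an expectation or averaging over the current probability measure on dynamics rather than a single known $f$; the paper derives the appropriate form.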
|Title:||A model for system uncertainty in reinforcement learning|
|Publication date:||2018|
|Appears in document types:||1.1 Journal article|