Convergence of the Value Function in Optimal Control Problems with Unknown Dynamics
Palladino M.
2021-01-01
Abstract
We deal with the convergence of the value function of an approximate control problem with uncertain dynamics to the value function of a nonlinear optimal control problem. The assumptions on the dynamics and the costs are rather general, and uncertainty in the dynamics is assumed to be represented by a probability distribution. The proposed framework aims to describe and motivate some model-based Reinforcement Learning algorithms in which the model is probabilistic. We also present numerical experiments that confirm the theoretical results.
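To fix ideas, a schematic version of the convergence statement described in the abstract can be written as follows; the notation (dynamics $f$, running cost $\ell$, terminal cost $g$, probability measure $\mu$ over candidate models) is assumed here for illustration and is not taken verbatim from the paper.

\[
V(x) \;=\; \inf_{u(\cdot)} \Big\{ \int_0^T \ell\big(x(t),u(t)\big)\,dt + g\big(x(T)\big) \Big\},
\qquad \dot x(t) = f\big(x(t),u(t)\big),\; x(0)=x,
\]
\[
V_\mu(x) \;=\; \text{same cost, with } f \text{ replaced by the averaged field }
f_\mu(x,u) = \int \tilde f(x,u)\, d\mu(\tilde f),
\]
and the convergence result asserts that $V_\mu \to V$ (for instance, uniformly on compact sets) as the probability distribution $\mu$ concentrates around the true dynamics $f$, which is the regime reached by a model-based Reinforcement Learning algorithm as its probabilistic model improves.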