
State Function Approximation: Linear Function


In the previous posts, we used different techniques to build and keep updating State-Action tables. But doing the same thing becomes impossible when the number of states and actions gets huge. So in this post we discuss using a parameterized function to approximate the value function instead.

 

Basic Idea of State Function Approximation

Instead of looking values up in a State-Action table, we build a black box with weights inside it. We just tell the black box which state's value we want, and it calculates and outputs that value. The weights can be learned from data, which makes this a typical supervised learning problem.

[Figure: a state s is fed into the parameterized black box, which outputs the approximate value v̂(s, w).]

 

The input of the system is actually a representation of the state S, so we need to do Feature Engineering (feature extraction) to represent the state. x(s) is the feature vector of state S:

\[ \mathbf{x}(s) = \big(x_1(s),\ x_2(s),\ \dots,\ x_n(s)\big)^{\top} \]
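As a concrete illustration, here is a minimal sketch of a hand-crafted feature map, assuming a hypothetical grid world whose state is an (x, y) cell position (the specific features are made up for this example, not taken from the post):

```python
import numpy as np

def features(state, grid_size=10):
    """Map a state s = (x, y) to a feature vector x(s)."""
    x, y = state
    return np.array([
        x / grid_size,          # normalized x coordinate
        y / grid_size,          # normalized y coordinate
        (x / grid_size) ** 2,   # a simple nonlinear feature
        (y / grid_size) ** 2,
        1.0,                    # constant bias feature
    ])

print(features((3, 7)))  # feature vector for state s = (3, 7)
```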

 

Linear Function Approximation with an Oracle

For the black box we can use different models. In this post, we use a linear function: the inner product of the features and the weights.

\[ \hat{v}(s, \mathbf{w}) = \mathbf{x}(s)^{\top}\mathbf{w} = \sum_{j=1}^{n} x_j(s)\, w_j \]
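In code, this is just a dot product. A minimal sketch (the feature map and the weight values are made up for illustration):

```python
import numpy as np

def x(s):
    # hypothetical feature map for a scalar state s
    return np.array([s, s ** 2, 1.0])

def v_hat(s, w):
    # linear approximation: inner product of features and weights
    return x(s) @ w

w = np.array([0.5, -0.1, 2.0])  # example weights
print(v_hat(3.0, w))            # approximate value of state s = 3.0
```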

 

Assume we are cheating for now, knowing the true State Value function v_π. Then we can do Gradient Descent on the Mean Squared Error cost:

\[ J(\mathbf{w}) = \mathbb{E}_{\pi}\Big[\big(v_{\pi}(S) - \hat{v}(S, \mathbf{w})\big)^{2}\Big] \]

\[ \Delta\mathbf{w} = \alpha\, \mathbb{E}_{\pi}\Big[\big(v_{\pi}(S) - \hat{v}(S, \mathbf{w})\big)\, \mathbf{x}(S)\Big] \]

 

and SGD samples the gradient:

\[ \Delta\mathbf{w} = \alpha\, \big(v_{\pi}(S) - \hat{v}(S, \mathbf{w})\big)\, \mathbf{x}(S) \]
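A sketch of this oracle-based SGD loop, assuming a made-up `oracle` function that plays the role of the known v_π:

```python
import numpy as np

def x(s):
    return np.array([s, s ** 2, 1.0])    # toy feature map

def v_hat(s, w):
    return x(s) @ w

def sgd_step(w, s, v_true, alpha=0.1):
    # delta_w = alpha * (v_pi(s) - v_hat(s, w)) * x(s)
    return w + alpha * (v_true - v_hat(s, w)) * x(s)

oracle = lambda s: 2.0 * s + 1.0         # pretend we know v_pi
w = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(5000):
    s = rng.uniform(0.0, 1.0)            # sample a state
    w = sgd_step(w, s, oracle(s))
print(w)  # v_hat should now roughly reproduce the oracle on [0, 1]
```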

 

Model-Free Value Function Approximation

Then we go back to reality and realize the oracle cannot help us, which means the only methods we can count on are Model-Free algorithms. So we first use Monte Carlo, replacing the oracle target in the SGD update with the sampled return G_t:

\[ \Delta\mathbf{w} = \alpha\, \big(G_t - \hat{v}(S_t, \mathbf{w})\big)\, \mathbf{x}(S_t) \]
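A sketch of this Monte Carlo version; the only new ingredient is computing the return G_t backwards over a finished episode (the `(state, reward)` episode format is an assumption of this sketch):

```python
import numpy as np

def x(s):
    return np.array([s, s ** 2, 1.0])   # toy feature map

def v_hat(s, w):
    return x(s) @ w

def mc_update(w, episode, alpha=0.05, gamma=1.0):
    """episode: list of (state, reward) pairs from one complete rollout."""
    G = 0.0
    for s, r in reversed(episode):      # accumulate the return backwards
        G = r + gamma * G
        w = w + alpha * (G - v_hat(s, w)) * x(s)   # SGD step toward G_t
    return w

w = np.zeros(3)
w = mc_update(w, [(0.2, 0.0), (0.5, 0.0), (0.9, 1.0)])
print(w)
```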

 

We can also use TD(0) learning. The cost function is:

\[ J(\mathbf{w}) = \mathbb{E}_{\pi}\Big[\big(R_{t+1} + \gamma\, \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w})\big)^{2}\Big] \]

Taking the derivative with respect to w (the bootstrapped target is treated as a constant, so this is a semi-gradient), the update is:

\[ \Delta\mathbf{w} = \alpha\, \big(R_{t+1} + \gamma\, \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_t, \mathbf{w})\big)\, \mathbf{x}(S_t) \]

The algorithm can be described as:

Initialize w arbitrarily. For each step of each episode: observe S_t, act according to the policy, observe R_{t+1} and S_{t+1}, then update w ← w + α (R_{t+1} + γ v̂(S_{t+1}, w) − v̂(S_t, w)) x(S_t), until S_{t+1} is terminal.
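As a runnable sketch, here is the algorithm on the classic 5-state random walk; the environment and the one-hot features are assumptions of this example, chosen because the true values are known to be 1/6 … 5/6:

```python
import numpy as np

N = 5                                    # non-terminal states 1..5

def x(s):
    f = np.zeros(N)
    f[s - 1] = 1.0                       # one-hot feature vector
    return f

def v_hat(s, w):
    return x(s) @ w

w = np.zeros(N)
alpha, gamma = 0.05, 1.0
rng = np.random.default_rng(0)
for _ in range(5000):                    # episodes
    s = 3                                # start in the middle
    while True:
        s2 = s + rng.choice([-1, 1])     # random policy: step left or right
        r = 1.0 if s2 == N + 1 else 0.0  # reward 1 only at the right exit
        if s2 == 0 or s2 == N + 1:       # terminal states have value 0
            w += alpha * (r - v_hat(s, w)) * x(s)
            break
        # TD(0) update: target is r + gamma * v_hat(S', w)
        w += alpha * (r + gamma * v_hat(s2, w) - v_hat(s, w)) * x(s)
        s = s2
print(w)  # should approach [1/6, 2/6, 3/6, 4/6, 5/6]
```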

 

Model-Free Control Based on State-Action Value Function Approximation

As with state value function approximation, we extract features from our target problem and build a feature vector, now over state-action pairs:

\[ \mathbf{x}(s, a) = \big(x_1(s, a),\ x_2(s, a),\ \dots,\ x_n(s, a)\big)^{\top} \]

Then the linear estimate of the Q-function is:

\[ \hat{q}(s, a, \mathbf{w}) = \mathbf{x}(s, a)^{\top}\mathbf{w} = \sum_{j=1}^{n} x_j(s, a)\, w_j \]
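A minimal sketch of the linear Q-function, assuming one-hot features over (state, action) pairs in a small made-up MDP:

```python
import numpy as np

N_STATES, N_ACTIONS = 4, 2

def x(s, a):
    f = np.zeros(N_STATES * N_ACTIONS)
    f[s * N_ACTIONS + a] = 1.0     # one-hot over (s, a) pairs
    return f

def q_hat(s, a, w):
    return x(s, a) @ w             # same inner-product form as before

w = np.zeros(N_STATES * N_ACTIONS)
print(q_hat(2, 1, w))              # 0.0 before any learning
```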

 

To minimize the MSE cost function, we take the derivative as before. The Monte Carlo gradient uses the sampled return G_t as the target:

\[ \Delta\mathbf{w} = \alpha\, \big(G_t - \hat{q}(S_t, A_t, \mathbf{w})\big)\, \mathbf{x}(S_t, A_t) \]

SARSA gradient:

\[ \Delta\mathbf{w} = \alpha\, \big(R_{t+1} + \gamma\, \hat{q}(S_{t+1}, A_{t+1}, \mathbf{w}) - \hat{q}(S_t, A_t, \mathbf{w})\big)\, \mathbf{x}(S_t, A_t) \]

Q-Learning gradient:

\[ \Delta\mathbf{w} = \alpha\, \big(R_{t+1} + \gamma\, \max_{a'} \hat{q}(S_{t+1}, a', \mathbf{w}) - \hat{q}(S_t, A_t, \mathbf{w})\big)\, \mathbf{x}(S_t, A_t) \]
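Putting the three updates side by side as code; this reuses the toy one-hot setup from above, and the step sizes are arbitrary (a sketch, not the post's implementation):

```python
import numpy as np

N_STATES, N_ACTIONS = 4, 2
alpha, gamma = 0.1, 0.9

def x(s, a):
    f = np.zeros(N_STATES * N_ACTIONS)
    f[s * N_ACTIONS + a] = 1.0
    return f

def q_hat(s, a, w):
    return x(s, a) @ w

def mc_step(w, s, a, G):
    # target is the sampled return G_t
    return w + alpha * (G - q_hat(s, a, w)) * x(s, a)

def sarsa_step(w, s, a, r, s2, a2):
    # target bootstraps on the action actually taken next
    target = r + gamma * q_hat(s2, a2, w)
    return w + alpha * (target - q_hat(s, a, w)) * x(s, a)

def q_learning_step(w, s, a, r, s2):
    # target bootstraps on the greedy next action
    target = r + gamma * max(q_hat(s2, b, w) for b in range(N_ACTIONS))
    return w + alpha * (target - q_hat(s, a, w)) * x(s, a)

w = np.zeros(N_STATES * N_ACTIONS)
w = sarsa_step(w, 0, 1, 0.5, 2, 0)   # one example transition
print(w)
```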

References:

https://www.youtube.com/watch?v=buptHUzDKcE

https://www.youtube.com/watch?v=UoPei5o4fps&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ&index=6


Original: https://www.cnblogs.com/rhyswang/p/11326010.html
