
[Reinforcement Learning Paper Notes (7)]: DPG


Deterministic Policy Gradient Algorithms

Paper link

DPG

Notes

Motivation

The earliest policy gradient algorithms were stochastic.

"Stochastic" here refers to a stochastic policy \(\pi_\theta(a|s)=P[a|s;\theta]\). In a high-dimensional continuous action space, however, a stochastic policy can be problematic: it has to account for the effect of every possible action in the current state, so it needs considerably more \((s,a)\) samples to form an accurate estimate.

For a deterministic policy \(a=\mu_\theta(s)\), on the other hand, a policy gradient was previously thought not to exist or not to be usable (the detailed reasons remain to be filled in; one obvious issue is the lack of exploration).

Going against this conventional wisdom, the paper proposes the deterministic policy gradient, i.e. DPG.

The paper works off-policy: a stochastic behaviour policy is used to select actions, while a deterministic target policy is learned.

policy gradient

\[J(\pi_\theta)=\int_S \rho^\pi(s)\int_A \pi_\theta (s,a)r(s,a)dads=E_{s\sim \rho^\pi ,a\sim \pi_\theta}[r(s,a)]\]

where \(\rho^\pi\) is the discounted state distribution \(\rho^\pi(s') = \int_S \sum_{t=1}^{\infty}\gamma^{t-1}p_1(s)p(s\to s',t,\pi)ds\).
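As a quick sanity check of this objective, here is a minimal Monte Carlo sketch. It assumes a hypothetical Gym-style environment with `reset()`/`step()` (not anything from the paper); because \(\rho^\pi\) is the discounted state distribution, the discounted return of a sampled trajectory is an unbiased sample of \(J(\pi_\theta)\).

```python
import numpy as np

def estimate_J(env, policy, gamma=0.99, n_episodes=100, horizon=200):
    """Monte Carlo estimate of J(pi) = E[sum_t gamma^(t-1) r_t].

    Because rho^pi is the (unnormalised) discounted state distribution,
    the discounted return of a trajectory is an unbiased sample of J.
    `env` is assumed to follow a Gym-style reset()/step() interface.
    """
    returns = []
    for _ in range(n_episodes):
        s = env.reset()
        G, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)                 # a ~ pi_theta(.|s)
            s, r, done = env.step(a)      # hypothetical interface
            G += discount * r
            discount *= gamma
            if done:
                break
        returns.append(G)
    return np.mean(returns)
```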

stochastic policy gradient

policy gradient theorem:

\[\nabla_\theta J(\pi_\theta)=\int_S \rho^\pi(s)\int_A \nabla_\theta \pi_\theta (s,a)Q^\pi(s,a)dads=E_{s\sim \rho^\pi ,a\sim \pi_\theta}[\nabla_\theta \log \pi_\theta(s,a)Q^\pi(s,a)]\]
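As a concrete (if naive) instantiation of the theorem, the sketch below uses a one-dimensional linear-Gaussian policy \(\pi_\theta(a|s)=\mathcal{N}(\theta^\top\phi(s),\sigma^2)\) and replaces \(Q^\pi(s_t,a_t)\) with the sampled return from time \(t\) (the REINFORCE estimator). The feature map `phi` and the trajectory format are assumptions made purely for illustration.

```python
import numpy as np

def reinforce_gradient(trajectories, phi, theta, sigma=0.5, gamma=0.99):
    """Score-function estimate of grad_theta J for a linear-Gaussian policy
    pi_theta(a|s) = N(theta^T phi(s), sigma^2) with a scalar action.

    Q^pi(s_t, a_t) is replaced by the sampled discounted return from t.
    `trajectories` is a list of [(s, a, r), ...] lists.
    """
    grad = np.zeros_like(theta)
    for traj in trajectories:
        rewards = [r for (_, _, r) in traj]
        for t, (s, a, _) in enumerate(traj):
            # return-to-go as an unbiased stand-in for Q^pi(s_t, a_t)
            G = sum(gamma**k * r_k for k, r_k in enumerate(rewards[t:]))
            # grad_theta log pi_theta(a|s) for the Gaussian policy
            score = (a - theta @ phi(s)) * phi(s) / sigma**2
            grad += gamma**t * score * G
    return grad / len(trajectories)
```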

stochastic Actor-Critic algorithm

The critic estimates the action-value function by TD learning, \(Q^w(s,a)\approx Q^\pi(s,a)\), and this estimate is substituted into the policy gradient:

\[\nabla_\theta J(\pi_\theta)=\int_S \rho^\pi(s)\int_A \nabla_\theta \pi_\theta (s,a)Q^w(s,a)dads=E_{s\sim \rho^\pi ,a\sim \pi_\theta}[\nabla_\theta \log \pi_\theta(s,a)Q^w(s,a)]\]
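A sketch of a single on-policy actor-critic update along these lines, with the same linear-Gaussian policy as above and a generic linear critic \(Q^w(s,a)=w^\top\psi(s,a)\). The feature maps `phi`/`psi` are made up for illustration; the compatible function approximation the paper analyses is more specific than this.

```python
import numpy as np

def actor_critic_step(s, a, r, s2, a2, theta, w, phi, psi,
                      alpha_theta=1e-3, alpha_w=1e-2, gamma=0.99, sigma=0.5):
    """One on-policy actor-critic update (SARSA-style TD(0) critic).

    Critic:  Q^w(s,a) = w^T psi(s,a), moved toward the TD target.
    Actor:   theta moves along grad_theta log pi_theta(a|s) * Q^w(s,a).
    """
    # critic: TD(0) update of w
    delta = r + gamma * (w @ psi(s2, a2)) - w @ psi(s, a)
    w = w + alpha_w * delta * psi(s, a)

    # actor: stochastic policy gradient, with Q^w standing in for Q^pi
    score = (a - theta @ phi(s)) * phi(s) / sigma**2
    theta = theta + alpha_theta * score * (w @ psi(s, a))
    return theta, w
```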

Off-policy AC

Actions are generated by a behaviour policy \(\beta(a|s)\neq \pi_\theta(a|s)\), and the objective becomes the value of the target policy averaged over the behaviour policy's state distribution:

\[J_\beta(\pi_\theta)=\int_S \rho^\beta(s)V^\pi(s)ds=\int_S \int_A \rho^\beta(s) \pi_\theta (s,a)Q^\pi(s,a)dads\]

\[\nabla_\theta J_\beta(\pi_\theta)\approx\int_S \int_A \rho^\beta(s)\nabla_\theta \pi_\theta (s,a)Q^\pi(s,a)dads=E_{s\sim \rho^\beta ,a\sim \beta}[\frac{\pi_\theta(a|s)}{\beta(a|s)} \nabla_\theta \log \pi_\theta(s,a)Q^\pi(s,a)]\]

The first step is an approximation that drops the term involving \(\nabla_\theta Q^\pi(s,a)\) (following Degris et al.'s off-policy actor-critic); the importance ratio \(\pi_\theta(a|s)/\beta(a|s)\) corrects for the actions being sampled from \(\beta\) instead of \(\pi_\theta\).
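A sketch of one such off-policy update, continuing the linear-Gaussian setup. This is only in the spirit of the expression above (and of Degris et al.'s Off-PAC, which actually uses gradient-TD methods for the critic): here the critic is a plain TD(0) step whose bootstrap target evaluates \(Q^w\) at the target policy's mean action, and only the actor term is importance-weighted. `beta_prob` is the behaviour policy's density at the sampled action.

```python
import numpy as np

def off_policy_ac_step(s, a, r, s2, theta, w, phi, psi, beta_prob,
                       alpha_theta=1e-3, alpha_w=1e-2, gamma=0.99, sigma=0.5):
    """One off-policy actor-critic update; the action a was drawn from a
    behaviour policy beta, so the actor term is reweighted by
    pi_theta(a|s) / beta(a|s)."""
    mu = theta @ phi(s)
    pi_prob = np.exp(-0.5 * ((a - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    rho = pi_prob / beta_prob                 # importance ratio

    # critic: TD(0) with the bootstrap target evaluated at the target
    # policy's mean action (a simplification of what Off-PAC really does)
    delta = r + gamma * (w @ psi(s2, theta @ phi(s2))) - w @ psi(s, a)
    w = w + alpha_w * delta * psi(s, a)

    # actor: importance-weighted score-function update
    score = (a - mu) * phi(s) / sigma**2
    theta = theta + alpha_theta * rho * score * (w @ psi(s, a))
    return theta, w
```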

DPG

Model-free RL algorithms are generally built on GPI (generalised policy iteration: interleaving policy evaluation with policy improvement).

In a continuous action space, performing policy improvement by greedily finding the global maximiser of Q is impractical, so instead of computing the global maximum directly, the policy is simply moved in the direction that increases Q.

In formulas, for a deterministic policy \(a = \mu_\theta(s)\):

The usual policy improvement step would be:

\(\mu^{k+1}(s)=\underset{a}{\arg\max}\,Q^{\mu^k}(s,a)\)

Since this global maximisation is infeasible, we instead move the policy parameters in the direction of the gradient of Q:

\[\theta^{k+1}=\theta^{k}+\alpha E_{s\sim\rho^{\mu^k}}[\nabla_\theta Q^{\mu^k}(s,\mu_\theta(s))]\]

Applying the chain rule:

\[\theta^{k+1}=\theta^{k}+\alpha E_{s\sim\rho^{\mu^k}}[\nabla_\theta \mu_\theta(s)\nabla_a Q^{\mu^k}(s,a)|_{a=\mu_\theta(s)}]\]

A remaining issue: when the policy changes, \(\rho^\mu\) changes as well, so from this argument alone it is not obvious that the update actually achieves policy improvement.
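The chain-rule step above is easy to check numerically. A minimal sketch with a linear deterministic policy \(\mu_\theta(s)=\theta^\top\phi(s)\) (scalar action) and a toy, hand-written Q; both are invented purely for illustration.

```python
import numpy as np

# Linear deterministic policy mu_theta(s) = theta^T phi(s), scalar action.
phi = lambda s: np.array([s, s**2, 1.0])        # made-up features
Q = lambda s, a: -(a - np.sin(s))**2            # toy critic with known gradient
dQ_da = lambda s, a: -2.0 * (a - np.sin(s))     # grad_a Q(s, a)

theta = np.array([0.3, -0.1, 0.2])
s = 0.7
a = theta @ phi(s)

# chain rule: grad_theta Q(s, mu_theta(s)) = grad_theta mu_theta(s) * grad_a Q|_{a=mu_theta(s)}
grad_chain = phi(s) * dQ_da(s, a)

# finite-difference check of grad_theta Q(s, mu_theta(s))
eps = 1e-6
grad_fd = np.array([
    (Q(s, (theta + eps * e) @ phi(s)) - Q(s, (theta - eps * e) @ phi(s))) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(grad_chain, grad_fd, atol=1e-5))   # True
```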

Deterministic Policy Gradient Theorem

\[J(\mu_\theta)=\int_S \rho^\mu(s) r(s,\mu_\theta(s))ds=E_{s\sim \rho^\mu}[r(s,\mu_\theta(s))]\]

\[\nabla_\theta J(\mu_\theta)=\int_S \rho^\mu(s) \nabla_\theta \mu_\theta (s) \nabla_a Q^\mu(s,a)|_{a=\mu_\theta(s)}ds=E_{s\sim \rho^\mu}[\nabla_\theta \mu_\theta(s) \nabla_a Q^\mu(s,a)|_{a=\mu_\theta(s)}]\]
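Putting the theorem to work, here is a minimal sketch of an off-policy deterministic actor-critic update. It is in the spirit of the paper's COPDAC algorithms but not their exact form: the features are generic and made up, and \(\nabla_a Q^w\) is estimated by a finite difference instead of using compatible function approximation.

```python
import numpy as np

def dpg_step(s, a, r, s2, theta, w, phi, psi,
             alpha_theta=1e-4, alpha_w=1e-3, gamma=0.99):
    """One off-policy deterministic actor-critic update (sketch only).

    Policy:  a = mu_theta(s) = theta^T phi(s)   (scalar action)
    Critic:  Q^w(s,a) = w^T psi(s,a)
    The transition (s, a, r, s2) may come from any behaviour policy.
    """
    # critic: Q-learning-style TD update, bootstrapping with the
    # target policy's action at s2
    a2 = theta @ phi(s2)
    delta = r + gamma * (w @ psi(s2, a2)) - w @ psi(s, a)
    w = w + alpha_w * delta * psi(s, a)

    # actor: grad_theta mu_theta(s) * grad_a Q^w(s,a)|_{a=mu_theta(s)},
    # with grad_a Q^w estimated here by a central finite difference
    mu_s = theta @ phi(s)
    eps = 1e-4
    dQ_da = (w @ psi(s, mu_s + eps) - w @ psi(s, mu_s - eps)) / (2 * eps)
    theta = theta + alpha_theta * phi(s) * dQ_da
    return theta, w
```

Note that because the actor is updated through \(\nabla_a Q^w\) rather than a likelihood-ratio term, no importance weight over actions is needed, which is exactly what makes the deterministic gradient attractive for off-policy learning.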


Original post: https://www.cnblogs.com/Lzqayx/p/12146530.html
