
TensorFlow Optimizer.minimize() and gradient clipping


In TensorFlow, a model is typically trained as follows:

# Define the optimizer; lr is the learning rate, assumed to be defined elsewhere.
opt = tf.train.AdamOptimizer(lr)
# Define the train op; loss is the model's loss, assumed to be defined elsewhere.
train = opt.minimize(loss)

for i in range(100):
    sess.run(train)
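
For context, here is a minimal self-contained sketch of this pattern; the linear model, toy data, and learning rate below are illustrative assumptions, not part of the original:

import numpy as np
import tensorflow as tf

# A toy linear regression model (illustrative only).
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

opt = tf.train.AdamOptimizer(0.01)
train = opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        sess.run(train, feed_dict={x: np.random.rand(8, 1),
                                   y: np.random.rand(8, 1)})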

train refers to the training node in the tf.Graph. The call opt.minimize(loss) itself is rather opaque; it is equivalent to

# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(loss, <list of variables>)

# grads_and_vars is a list of tuples (gradient, variable).  

# Ask the optimizer to apply the gradients.
train = opt.apply_gradients(grads_and_vars)

In other words, minimize() builds both the nodes that compute the gradients and the node where the optimizer uses those gradients to update the variables.
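
One detail worth noting: like minimize(), apply_gradients() accepts an optional global_step argument that is incremented once per parameter update. A small sketch (the variable name global_step is illustrative):

global_step = tf.Variable(0, trainable=False, name="global_step")
grads_and_vars = opt.compute_gradients(loss)
train = opt.apply_gradients(grads_and_vars, global_step=global_step)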

Since the two steps are exposed separately, the gradients can be modified before they are applied:

grads_and_vars = opt.compute_gradients(loss, <list of variables>)
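# MyCapper stands for any user-defined function that transforms a gradient tensor.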
capped_grads_and_vars = [(MyCapper(grad), var) for grad, var in grads_and_vars]
opt.apply_gradients(capped_grads_and_vars)

Here are two concrete examples.

# tf.clip_by_value(
#     t,
#     clip_value_min,
#     clip_value_max,
#     name=None
# )

grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in grads_and_vars]
opt.apply_gradients(capped_grads_and_vars)
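
Note that compute_gradients() returns a (None, variable) pair for any variable that does not influence the loss, and tf.clip_by_value cannot handle None. A defensive variant of the snippet above simply skips those pairs:

grads_and_vars = opt.compute_gradients(loss)
# Skip variables whose gradient is None (i.e., not connected to the loss).
capped_grads_and_vars = [(tf.clip_by_value(g, -1., 1.), v)
                         for g, v in grads_and_vars if g is not None]
opt.apply_gradients(capped_grads_and_vars)
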
# tf.clip_by_global_norm(
#     t_list,
#     clip_norm,
#     use_norm=None,
#     name=None
# )
# Returns:
#     list_clipped: A list of Tensors of the same type as t_list.
#     global_norm: A 0-D (scalar) Tensor representing the global norm.

opt = tf.train.AdamOptimizer(lr)
# Unzip the (gradient, variable) pairs; "tvars" avoids shadowing the built-in vars().
grads, tvars = zip(*opt.compute_gradients(loss))
grads, _ = tf.clip_by_global_norm(grads, 5.0)
train = opt.apply_gradients(zip(grads, tvars))
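
For reference, tf.clip_by_global_norm computes global_norm = sqrt(sum_i ||t_i||^2) over the whole gradient list and rescales every tensor by clip_norm / max(global_norm, clip_norm), so the relative proportions between gradients are preserved, unlike the per-element clipping above. A minimal end-to-end sketch that also monitors the norm (assuming loss and lr are defined as before, with no placeholders to feed):

opt = tf.train.AdamOptimizer(lr)
grads, tvars = zip(*opt.compute_gradients(loss))
clipped_grads, global_norm = tf.clip_by_global_norm(grads, 5.0)
train = opt.apply_gradients(list(zip(clipped_grads, tvars)))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        # Fetch global_norm alongside train to watch for exploding gradients.
        _, norm = sess.run([train, global_norm])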


Source: https://www.cnblogs.com/esoteric/p/9319266.html
