
BP, Gradient descent and Generalisation

Posted: 2016-10-25 16:40:38

For each training pattern presented to a multilayer neural network, we can compute the error:

e(p) = y_d(p) - y(p)

where y_d(p) is the desired output and y(p) the actual output for pattern p. The sum-squared error (SSE), obtained by squaring these errors and summing across all n training patterns, gives a good measure of the overall performance of the network.

The SSE depends on the weights and thresholds.
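The error and SSE definitions above can be sketched as follows (the array values are illustrative, not from the text):

```python
import numpy as np

# y_d: desired outputs, y: actual network outputs, one row per training pattern.
y_d = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])
y   = np.array([[0.9, 0.2],
                [0.1, 0.8],
                [0.7, 0.9]])

e = y_d - y           # per-pattern error e(p) = y_d(p) - y(p)
sse = np.sum(e ** 2)  # square and sum across all n patterns (and all outputs)
print(sse)
```

Because every term is squared, the SSE is always non-negative and is zero only when the network reproduces every desired output exactly.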



Back-propagation

Back-propagation is a "gradient descent" training algorithm.


Steps:

1. Calculate the error for a single pattern.

2. Compute the weight changes that produce the greatest reduction in error by following the error gradient (steepest slope).

This is only possible with differentiable activation functions (e.g. the sigmoid).
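The two steps above can be sketched for a single sigmoid output neuron (the input values, learning rate alpha, and variable names are my own assumptions, not from the text):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([1.0, 0.5])   # inputs for one training pattern
w = np.array([0.2, -0.4])  # current weights
theta = 0.1                # threshold
y_d = 1.0                  # desired output
alpha = 0.5                # assumed learning rate

y = sigmoid(np.dot(w, x) - theta)  # actual output
e = y_d - y                        # step 1: error for this single pattern
delta = e * y * (1.0 - y)          # step 2: gradient uses the sigmoid's derivative y(1 - y)
w = w + alpha * delta * x          # weight change along the steepest slope
theta = theta - alpha * delta      # threshold adjusted the same way
```

After the update the output moves toward the desired value; the `y * (1.0 - y)` factor is exactly why a differentiable activation function is required.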


Gradient descent is only approximate here, because training proceeds pattern by pattern: each update follows the gradient of a single pattern's error rather than the full SSE.

Gradient descent may not always reach the true global error minimum; instead it may get stuck in a "local" minimum.


Solution: add a momentum term.

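A minimal sketch of a momentum update, assuming the common form Δw(p) = β·Δw(p−1) + (gradient step), with an assumed momentum constant β = 0.9 (the figure's exact formula is not preserved in the text):

```python
def momentum_update(w, dw_prev, grad_step, beta=0.9):
    # Current weight change blends the previous change with the new gradient step,
    # which builds up speed in consistent directions and helps the search roll
    # through shallow local minima.
    dw = beta * dw_prev + grad_step
    return w + dw, dw

w, dw = 0.2, 0.0
for step in [0.05, 0.05, 0.05]:  # repeated identical gradient steps accelerate
    w, dw = momentum_update(w, dw, step)
```

With momentum, three identical gradient steps move the weight further than three plain steps would, illustrating the acceleration effect.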


Source: http://www.cnblogs.com/eli-ayase/p/5906175.html
