
Summary: Different Methods for Weight Initialization in Deep Learning


This post summarizes three weight initialization methods. The first two are fairly common; the last one is the most recent. For smoother reading (it was originally written for a foreign colleague), the summary is in English. Additions and corrections are welcome.

Please respect the original work and cite it when reposting: http://blog.csdn.net/tangwei2014


1. Gaussian


Weights are randomly drawn from Gaussian distributions with fixed mean (e.g., 0) and fixed standard deviation (e.g., 0.01). 

This is the most common initialization method in deep learning.
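
For concreteness, here is a minimal NumPy sketch of this fixed-std Gaussian initialization; the helper name gaussian_init, the default std of 0.01, and the example layer shape are illustrative assumptions, not part of the original post:

import numpy as np

# Hypothetical helper; name, default std, and shape are illustrative only.
def gaussian_init(shape, mean=0.0, std=0.01):
    """Draw weights from a Gaussian with fixed mean and fixed standard deviation."""
    return np.random.normal(loc=mean, scale=std, size=shape)

# Example: a fully connected layer with 256 inputs and 128 outputs.
W = gaussian_init((256, 128))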


2. Xavier


This method adopts a properly scaled uniform or Gaussian distribution for initialization.

In Caffe (an open-source framework for deep learning) [2], the weights in a network are initialized by drawing them from a distribution with zero mean and a specific variance,

    Var(W) = 1 / n_in

where W is the initialization distribution for the neuron in question and n_in is the number of neurons feeding into it. The distribution used is typically Gaussian or uniform.

Glorot & Bengio's paper [1] originally recommended using

    Var(W) = 2 / (n_in + n_out)

where n_out is the number of neurons the result is fed to.
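
For concreteness, here is a minimal NumPy sketch of both variants in their Gaussian form; the helper names xavier_caffe and xavier_glorot and the example layer sizes are illustrative assumptions (as noted above, a scaled uniform distribution with the same variance is equally valid):

import numpy as np

# Hypothetical helpers; names and layer sizes are illustrative only.
def xavier_caffe(n_in, n_out):
    """Caffe-style Xavier: zero-mean Gaussian with Var(W) = 1 / n_in."""
    std = np.sqrt(1.0 / n_in)
    return np.random.normal(0.0, std, size=(n_in, n_out))

def xavier_glorot(n_in, n_out):
    """Glorot & Bengio: zero-mean Gaussian with Var(W) = 2 / (n_in + n_out)."""
    std = np.sqrt(2.0 / (n_in + n_out))
    return np.random.normal(0.0, std, size=(n_in, n_out))

# Example: a layer fed by 512 neurons whose outputs go to 256 neurons.
W = xavier_glorot(512, 256)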

References:

[1] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.

[2] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.


3. MSRA


This method was proposed to make it possible to train extremely deep rectified models directly from scratch [1].

In this method, weights are initialized from a zero-mean Gaussian distribution whose standard deviation is

    std = sqrt(2 / (k_l^2 * d_{l-1}))

where k_l is the spatial filter size in layer l and d_{l-1} is the number of filters in layer l-1.
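
For concreteness, here is a minimal NumPy sketch of this rule for a convolutional layer; the helper name msra_init and the (d_l, d_{l-1}, k_l, k_l) filter-bank layout are illustrative assumptions:

import numpy as np

# Hypothetical helper; name and filter-bank layout are illustrative only.
def msra_init(k, d_prev, d_curr):
    """MSRA/He init: zero-mean Gaussian with std = sqrt(2 / n_l),
    where n_l = k^2 * d_prev for k x k filters over d_prev input channels."""
    n_l = k * k * d_prev
    std = np.sqrt(2.0 / n_l)
    return np.random.normal(0.0, std, size=(d_curr, d_prev, k, k))

# Example: 3x3 filters, 64 filters in layer l-1, 128 filters in layer l.
W = msra_init(3, 64, 128)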

Reference:

[1] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. arXiv:1502.01852, Feb. 2015.


Original: http://blog.csdn.net/tangwei2014/article/details/47091881
